00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 143
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3645
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.156 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.162 The recommended git tool is: git
00:00:00.162 using credential 00000000-0000-0000-0000-000000000002
00:00:00.167 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.207 Fetching changes from the remote Git repository
00:00:00.209 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.244 Using shallow fetch with depth 1
00:00:00.244 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.244 > git --version # timeout=10
00:00:00.276 > git --version # 'git version 2.39.2'
00:00:00.276 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.291 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.291 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.551 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.562 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.571 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.571 > git config core.sparsecheckout # timeout=10
00:00:08.580 > git read-tree -mu HEAD # timeout=10
00:00:08.592 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.609 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.609 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.690 [Pipeline] Start of Pipeline
00:00:08.701 [Pipeline] library
00:00:08.702 Loading library shm_lib@master
00:00:08.702 Library shm_lib@master is cached. Copying from home.
00:00:08.716 [Pipeline] node
00:00:08.735 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:08.737 [Pipeline] {
00:00:08.748 [Pipeline] catchError
00:00:08.750 [Pipeline] {
00:00:08.761 [Pipeline] wrap
00:00:08.768 [Pipeline] {
00:00:08.775 [Pipeline] stage
00:00:08.777 [Pipeline] { (Prologue)
00:00:08.793 [Pipeline] echo
00:00:08.794 Node: VM-host-SM38
00:00:08.800 [Pipeline] cleanWs
00:00:08.815 [WS-CLEANUP] Deleting project workspace...
00:00:08.815 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.827 [WS-CLEANUP] done
00:00:09.044 [Pipeline] setCustomBuildProperty
00:00:09.128 [Pipeline] httpRequest
00:00:09.442 [Pipeline] echo
00:00:09.443 Sorcerer 10.211.164.20 is alive
00:00:09.450 [Pipeline] retry
00:00:09.451 [Pipeline] {
00:00:09.460 [Pipeline] httpRequest
00:00:09.463 HttpMethod: GET
00:00:09.464 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.464 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.487 Response Code: HTTP/1.1 200 OK
00:00:09.487 Success: Status code 200 is in the accepted range: 200,404
00:00:09.488 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:25.521 [Pipeline] }
00:00:25.543 [Pipeline] // retry
00:00:25.552 [Pipeline] sh
00:00:25.843 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:25.863 [Pipeline] httpRequest
00:00:26.265 [Pipeline] echo
00:00:26.267 Sorcerer 10.211.164.20 is alive
00:00:26.278 [Pipeline] retry
00:00:26.280 [Pipeline] {
00:00:26.296 [Pipeline] httpRequest
00:00:26.301 HttpMethod: GET
00:00:26.302 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:26.302 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:26.314 Response Code: HTTP/1.1 200 OK
00:00:26.314 Success: Status code 200 is in the accepted range: 200,404
00:00:26.315 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:01:23.372 [Pipeline] }
00:01:23.390 [Pipeline] // retry
00:01:23.398 [Pipeline] sh
00:01:23.685 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:01:26.275 [Pipeline] sh
00:01:26.561 + git -C spdk log --oneline -n5
00:01:26.561 b18e1bd62 version: v24.09.1-pre
00:01:26.561 19524ad45 version: v24.09
00:01:26.561 9756b40a3 dpdk: update submodule to include alarm_cancel fix
00:01:26.561 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810
00:01:26.561 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys
00:01:26.584 [Pipeline] withCredentials
00:01:26.597 > git --version # timeout=10
00:01:26.611 > git --version # 'git version 2.39.2'
00:01:26.631 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:26.633 [Pipeline] {
00:01:26.643 [Pipeline] retry
00:01:26.645 [Pipeline] {
00:01:26.663 [Pipeline] sh
00:01:26.948 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:41.876 [Pipeline] }
00:01:41.893 [Pipeline] // retry
00:01:41.898 [Pipeline] }
00:01:41.911 [Pipeline] // withCredentials
00:01:41.919 [Pipeline] httpRequest
00:01:43.669 [Pipeline] echo
00:01:43.671 Sorcerer 10.211.164.20 is alive
00:01:43.681 [Pipeline] retry
00:01:43.683 [Pipeline] {
00:01:43.695 [Pipeline] httpRequest
00:01:43.701 HttpMethod: GET
00:01:43.701 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:43.702 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:43.726 Response Code: HTTP/1.1 200 OK
00:01:43.727 Success: Status code 200 is in the accepted range: 200,404
00:01:43.727 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:11.850 [Pipeline] }
00:02:11.924 [Pipeline] // retry
00:02:11.932 [Pipeline] sh
00:02:12.211 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:13.693 [Pipeline] sh
00:02:13.980 + git -C dpdk log --oneline -n5
00:02:13.980 caf0f5d395 version: 22.11.4
00:02:13.980 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:13.980 dc9c799c7d vhost: fix missing spinlock unlock
00:02:13.980 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:13.980 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:14.000 [Pipeline] writeFile
00:02:14.015 [Pipeline] sh
00:02:14.301 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:14.315 [Pipeline] sh
00:02:14.601 + cat autorun-spdk.conf
00:02:14.601 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:14.601 SPDK_TEST_NVME=1
00:02:14.601 SPDK_TEST_FTL=1
00:02:14.601 SPDK_TEST_ISAL=1
00:02:14.601 SPDK_RUN_ASAN=1
00:02:14.601 SPDK_RUN_UBSAN=1
00:02:14.601 SPDK_TEST_XNVME=1
00:02:14.601 SPDK_TEST_NVME_FDP=1
00:02:14.601 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:14.601 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:14.601 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:14.610 RUN_NIGHTLY=1
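The stage scripts that follow consume this file by sourcing it and gating optional steps on the flags, as the prepare_nvme.sh trace below shows. A minimal standalone sketch of that pattern (a hypothetical illustration, not part of the repo; it assumes the conf file sits in the current directory):

    #!/usr/bin/env bash
    # autorun-spdk.conf is plain KEY=VALUE shell, so it can be sourced directly;
    # the flags then exist as ordinary shell variables.
    source ./autorun-spdk.conf

    # In bash arithmetic context an unset flag evaluates to 0, so optional
    # features can be gated without pre-declaring every variable.
    if (( SPDK_TEST_FTL == 1 )); then
        echo "FTL enabled: an FTL-capable NVMe image will be provisioned"
    fi
    if (( SPDK_TEST_NVME_FDP == 1 )); then
        echo "FDP enabled: an FDP-capable NVMe image will be provisioned"
    fi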
00:02:14.612 [Pipeline] }
00:02:14.625 [Pipeline] // stage
00:02:14.641 [Pipeline] stage
00:02:14.643 [Pipeline] { (Run VM)
00:02:14.656 [Pipeline] sh
00:02:14.942 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:14.942 + echo 'Start stage prepare_nvme.sh'
00:02:14.942 Start stage prepare_nvme.sh
00:02:14.942 + [[ -n 7 ]]
00:02:14.942 + disk_prefix=ex7
00:02:14.942 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:14.942 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:14.942 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:14.942 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:14.942 ++ SPDK_TEST_NVME=1
00:02:14.942 ++ SPDK_TEST_FTL=1
00:02:14.942 ++ SPDK_TEST_ISAL=1
00:02:14.942 ++ SPDK_RUN_ASAN=1
00:02:14.942 ++ SPDK_RUN_UBSAN=1
00:02:14.942 ++ SPDK_TEST_XNVME=1
00:02:14.942 ++ SPDK_TEST_NVME_FDP=1
00:02:14.942 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:14.942 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:14.942 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:14.942 ++ RUN_NIGHTLY=1
00:02:14.942 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:14.942 + nvme_files=()
00:02:14.942 + declare -A nvme_files
00:02:14.942 + backend_dir=/var/lib/libvirt/images/backends
00:02:14.942 + nvme_files['nvme.img']=5G
00:02:14.942 + nvme_files['nvme-cmb.img']=5G
00:02:14.942 + nvme_files['nvme-multi0.img']=4G
00:02:14.942 + nvme_files['nvme-multi1.img']=4G
00:02:14.942 + nvme_files['nvme-multi2.img']=4G
00:02:14.942 + nvme_files['nvme-openstack.img']=8G
00:02:14.942 + nvme_files['nvme-zns.img']=5G
00:02:14.942 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:14.942 + (( SPDK_TEST_FTL == 1 ))
00:02:14.942 + nvme_files["nvme-ftl.img"]=6G
00:02:14.942 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:14.942 + nvme_files["nvme-fdp.img"]=1G
00:02:14.942 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:14.942 + for nvme in "${!nvme_files[@]}"
00:02:14.942 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:02:14.942 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:14.942 + for nvme in "${!nvme_files[@]}"
00:02:14.942 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:02:14.942 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:14.942 + for nvme in "${!nvme_files[@]}"
00:02:14.942 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:02:14.942 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:14.942 + for nvme in "${!nvme_files[@]}"
00:02:14.942 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:02:14.942 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:14.942 + for nvme in "${!nvme_files[@]}"
00:02:14.942 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:02:14.942 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:15.205 + for nvme in "${!nvme_files[@]}"
00:02:15.205 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:02:15.205 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:15.205 + for nvme in "${!nvme_files[@]}"
00:02:15.205 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:02:15.205 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:15.205 + for nvme in "${!nvme_files[@]}"
00:02:15.205 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:02:15.205 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:15.205 + for nvme in "${!nvme_files[@]}"
00:02:15.205 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:02:15.466 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:15.466 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:02:15.728 + echo 'End stage prepare_nvme.sh'
00:02:15.728 End stage prepare_nvme.sh
00:02:15.742 [Pipeline] sh
00:02:16.031 + DISTRO=fedora39
00:02:16.031 + CPUS=10
00:02:16.031 + RAM=12288
00:02:16.031 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:16.031 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:16.031
00:02:16.031 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:16.031 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:16.031 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:16.031 HELP=0
00:02:16.031 DRY_RUN=0
00:02:16.031 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:02:16.031 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:16.031 NVME_AUTO_CREATE=0
00:02:16.031 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:02:16.031 NVME_CMB=,,,,
00:02:16.031 NVME_PMR=,,,,
00:02:16.031 NVME_ZNS=,,,,
00:02:16.031 NVME_MS=true,,,,
00:02:16.031 NVME_FDP=,,,on,
00:02:16.031 SPDK_VAGRANT_DISTRO=fedora39
00:02:16.031 SPDK_VAGRANT_VMCPU=10
00:02:16.031 SPDK_VAGRANT_VMRAM=12288
00:02:16.031 SPDK_VAGRANT_PROVIDER=libvirt
00:02:16.031 SPDK_VAGRANT_HTTP_PROXY=
00:02:16.031 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:16.031 SPDK_OPENSTACK_NETWORK=0
00:02:16.031 VAGRANT_PACKAGE_BOX=0
00:02:16.031 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:16.031 FORCE_DISTRO=true
00:02:16.031 VAGRANT_BOX_VERSION=
00:02:16.031 EXTRA_VAGRANTFILES=
00:02:16.031 NIC_MODEL=e1000
00:02:16.031
00:02:16.031 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:16.031 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:18.579 Bringing machine 'default' up with 'libvirt' provider...
00:02:19.152 ==> default: Creating image (snapshot of base box volume).
00:02:19.152 ==> default: Creating domain with the following settings...
00:02:19.152 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732016012_e39cdb6df9d28e04df5b
00:02:19.152 ==> default: -- Domain type: kvm
00:02:19.152 ==> default: -- Cpus: 10
00:02:19.152 ==> default: -- Feature: acpi
00:02:19.152 ==> default: -- Feature: apic
00:02:19.152 ==> default: -- Feature: pae
00:02:19.152 ==> default: -- Memory: 12288M
00:02:19.152 ==> default: -- Memory Backing: hugepages:
00:02:19.152 ==> default: -- Management MAC:
00:02:19.152 ==> default: -- Loader:
00:02:19.152 ==> default: -- Nvram:
00:02:19.152 ==> default: -- Base box: spdk/fedora39
00:02:19.152 ==> default: -- Storage pool: default
00:02:19.152 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732016012_e39cdb6df9d28e04df5b.img (20G)
00:02:19.152 ==> default: -- Volume Cache: default
00:02:19.152 ==> default: -- Kernel:
00:02:19.152 ==> default: -- Initrd:
00:02:19.152 ==> default: -- Graphics Type: vnc
00:02:19.152 ==> default: -- Graphics Port: -1
00:02:19.152 ==> default: -- Graphics IP: 127.0.0.1
00:02:19.152 ==> default: -- Graphics Password: Not defined
00:02:19.152 ==> default: -- Video Type: cirrus
00:02:19.152 ==> default: -- Video VRAM: 9216
00:02:19.152 ==> default: -- Sound Type:
00:02:19.152 ==> default: -- Keymap: en-us
00:02:19.152 ==> default: -- TPM Path:
00:02:19.152 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:19.152 ==> default: -- Command line args:
00:02:19.152 ==> default: -> value=-device,
00:02:19.152 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:19.152 ==> default: -> value=-drive,
00:02:19.152 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:19.152 ==> default: -> value=-device,
00:02:19.152 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:19.152 ==> default: -> value=-device,
00:02:19.152 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:19.152 ==> default: -> value=-drive,
00:02:19.152 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:02:19.152 ==> default: -> value=-device,
00:02:19.152 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:19.152 ==> default: -> value=-device,
00:02:19.152 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:19.152 ==> default: -> value=-drive,
00:02:19.152 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:19.152 ==> default: -> value=-device,
00:02:19.152 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:19.152 ==> default: -> value=-drive,
00:02:19.152 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:19.152 ==> default: -> value=-device,
00:02:19.152 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:19.152 ==> default: -> value=-drive,
00:02:19.152 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:19.152 ==> default: -> value=-device,
00:02:19.152 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:19.152 ==> default: -> value=-device,
00:02:19.152 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:19.152 ==> default: -> value=-device,
00:02:19.152 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:19.152 ==> default: -> value=-drive,
00:02:19.152 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:19.152 ==> default: -> value=-device,
00:02:19.152 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
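Each backing file above is wired up as a -drive with if=none, paired with an nvme controller device and one nvme-ns namespace device on that controller's bus; multi-namespace controllers (nvme-2) simply attach several nvme-ns devices to one bus, and the FDP controller (nvme-3) additionally joins an nvme-subsys declared with fdp=on. A minimal hand-run equivalent for a single controller might look like the sketch below (the -machine, -m, and -display flags are assumptions added for a self-contained example, not taken from this log):

    qemu-system-x86_64 \
        -machine q35,accel=kvm -m 1024 -display none \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096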
00:02:19.414 ==> default: Creating shared folders metadata...
00:02:19.414 ==> default: Starting domain.
00:02:21.966 ==> default: Waiting for domain to get an IP address...
00:02:40.098 ==> default: Waiting for SSH to become available...
00:02:40.098 ==> default: Configuring and enabling network interfaces...
00:02:42.644 default: SSH address: 192.168.121.14:22
00:02:42.644 default: SSH username: vagrant
00:02:42.644 default: SSH auth method: private key
00:02:44.545 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:49.829 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:55.200 ==> default: Mounting SSHFS shared folder...
00:02:55.767 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:55.767 ==> default: Checking Mount..
00:02:57.139 ==> default: Folder Successfully Mounted!
00:02:57.139
00:02:57.139 SUCCESS!
00:02:57.139
00:02:57.139 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:57.139 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:57.139 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:57.139
00:02:57.146 [Pipeline] }
00:02:57.161 [Pipeline] // stage
00:02:57.170 [Pipeline] dir
00:02:57.171 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:57.172 [Pipeline] {
00:02:57.184 [Pipeline] catchError
00:02:57.186 [Pipeline] {
00:02:57.197 [Pipeline] sh
00:02:57.473 + vagrant ssh-config --host vagrant
00:02:57.473 + sed -ne '/^Host/,$p'
00:02:57.473 + tee ssh_conf
00:03:00.002 Host vagrant
00:03:00.002 HostName 192.168.121.14
00:03:00.002 User vagrant
00:03:00.002 Port 22
00:03:00.002 UserKnownHostsFile /dev/null
00:03:00.002 StrictHostKeyChecking no
00:03:00.002 PasswordAuthentication no
00:03:00.002 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:00.002 IdentitiesOnly yes
00:03:00.002 LogLevel FATAL
00:03:00.002 ForwardAgent yes
00:03:00.002 ForwardX11 yes
00:03:00.002
00:03:00.015 [Pipeline] withEnv
00:03:00.018 [Pipeline] {
00:03:00.033 [Pipeline] sh
00:03:00.311 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:03:00.311 source /etc/os-release
00:03:00.311 [[ -e /image.version ]] && img=$(< /image.version)
00:03:00.311 # Minimal, systemd-like check.
00:03:00.311 if [[ -e /.dockerenv ]]; then
00:03:00.311 # Clear garbage from the node'\''s name:
00:03:00.311 # agt-er_autotest_547-896 -> autotest_547-896
00:03:00.311 # $HOSTNAME is the actual container id
00:03:00.311 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:00.311 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:00.311 # We can assume this is a mount from a host where container is running,
00:03:00.311 # so fetch its hostname to easily identify the target swarm worker.
00:03:00.311 container="$(< /etc/hostname) ($agent)"
00:03:00.311 else
00:03:00.311 # Fallback
00:03:00.311 container=$agent
00:03:00.311 fi
00:03:00.311 fi
00:03:00.311 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:00.311 '
00:03:00.580 [Pipeline] }
00:03:00.596 [Pipeline] // withEnv
00:03:00.605 [Pipeline] setCustomBuildProperty
00:03:00.621 [Pipeline] stage
00:03:00.624 [Pipeline] { (Tests)
00:03:00.641 [Pipeline] sh
00:03:00.922 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:00.936 [Pipeline] sh
00:03:01.213 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:01.228 [Pipeline] timeout
00:03:01.229 Timeout set to expire in 50 min
00:03:01.231 [Pipeline] {
00:03:01.247 [Pipeline] sh
00:03:01.526 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:03:01.784 HEAD is now at b18e1bd62 version: v24.09.1-pre
00:03:01.796 [Pipeline] sh
00:03:02.074 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:03:02.345 [Pipeline] sh
00:03:02.623 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:02.638 [Pipeline] sh
00:03:02.929 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:03:02.929 ++ readlink -f spdk_repo
00:03:02.929 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:02.929 + [[ -n /home/vagrant/spdk_repo ]]
00:03:02.929 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:02.929 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:02.929 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:02.929 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:02.929 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:02.929 + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:02.929 + cd /home/vagrant/spdk_repo
00:03:02.929 + source /etc/os-release
00:03:02.929 ++ NAME='Fedora Linux'
00:03:02.929 ++ VERSION='39 (Cloud Edition)'
00:03:02.929 ++ ID=fedora
00:03:02.929 ++ VERSION_ID=39
00:03:02.929 ++ VERSION_CODENAME=
00:03:02.929 ++ PLATFORM_ID=platform:f39
00:03:02.929 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:02.929 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:02.929 ++ LOGO=fedora-logo-icon
00:03:02.929 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:02.929 ++ HOME_URL=https://fedoraproject.org/
00:03:02.929 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:02.929 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:02.929 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:02.929 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:02.929 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:02.929 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:02.929 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:02.929 ++ SUPPORT_END=2024-11-12
00:03:02.929 ++ VARIANT='Cloud Edition'
00:03:02.929 ++ VARIANT_ID=cloud
00:03:02.929 + uname -a
00:03:02.929 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:02.929 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:03.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:03.459 Hugepages
00:03:03.459 node hugesize free / total
00:03:03.459 node0 1048576kB 0 / 0
00:03:03.459 node0 2048kB 0 / 0
00:03:03.459
00:03:03.459 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:03.459 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:03.459 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:03:03.718 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:03:03.718 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:03:03.718 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:03.718 + rm -f /tmp/spdk-ld-path
00:03:03.718 + source autorun-spdk.conf
00:03:03.718 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:03.718 ++ SPDK_TEST_NVME=1
00:03:03.718 ++ SPDK_TEST_FTL=1
00:03:03.718 ++ SPDK_TEST_ISAL=1
00:03:03.718 ++ SPDK_RUN_ASAN=1
00:03:03.718 ++ SPDK_RUN_UBSAN=1
00:03:03.718 ++ SPDK_TEST_XNVME=1
00:03:03.718 ++ SPDK_TEST_NVME_FDP=1
00:03:03.718 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:03:03.718 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:03.718 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:03.718 ++ RUN_NIGHTLY=1
00:03:03.718 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:03.718 + [[ -n '' ]]
00:03:03.718 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:03.718 + for M in /var/spdk/build-*-manifest.txt
00:03:03.718 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:03.718 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:03.718 + for M in /var/spdk/build-*-manifest.txt
00:03:03.718 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:03.718 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:03.718 + for M in /var/spdk/build-*-manifest.txt
00:03:03.718 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:03.718 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:03.718 ++ uname
00:03:03.718 + [[ Linux == \L\i\n\u\x ]]
00:03:03.718 + sudo dmesg -T
00:03:03.718 + sudo dmesg --clear
00:03:03.718 + dmesg_pid=5752
00:03:03.718 + [[ Fedora Linux == FreeBSD ]]
00:03:03.718 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:03.718 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:03.718 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:03.718 + [[ -x /usr/src/fio-static/fio ]]
00:03:03.718 + sudo dmesg -Tw
00:03:03.718 + export FIO_BIN=/usr/src/fio-static/fio
00:03:03.718 + FIO_BIN=/usr/src/fio-static/fio
00:03:03.718 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:03.718 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:03.718 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:03.718 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:03.718 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:03.718 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:03.718 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:03.718 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:03.718 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:03.718 Test configuration:
00:03:03.718 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:03.718 SPDK_TEST_NVME=1
00:03:03.718 SPDK_TEST_FTL=1
00:03:03.718 SPDK_TEST_ISAL=1
00:03:03.718 SPDK_RUN_ASAN=1
00:03:03.718 SPDK_RUN_UBSAN=1
00:03:03.718 SPDK_TEST_XNVME=1
00:03:03.718 SPDK_TEST_NVME_FDP=1
00:03:03.718 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:03:03.718 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:03.718 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:03.718 RUN_NIGHTLY=1
11:34:17 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
11:34:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
11:34:17 -- scripts/common.sh@15 -- $ shopt -s extglob
11:34:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
11:34:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:34:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
11:34:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:34:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:34:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:34:17 -- paths/export.sh@5 -- $ export PATH
11:34:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:34:17 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
11:34:17 -- common/autobuild_common.sh@479 -- $ date +%s
11:34:17 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732016057.XXXXXX
11:34:17 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732016057.EgqG25
11:34:17 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
11:34:17 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
11:34:17 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
11:34:17 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
11:34:17 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
11:34:17 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
11:34:17 -- common/autobuild_common.sh@495 -- $ get_config_params
11:34:17 -- common/autotest_common.sh@407 -- $ xtrace_disable
11:34:17 -- common/autotest_common.sh@10 -- $ set +x
11:34:17 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
11:34:17 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
11:34:17 -- pm/common@17 -- $ local monitor
11:34:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:34:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
11:34:17 -- pm/common@25 -- $ sleep 1
11:34:17 -- pm/common@21 -- $ date +%s
11:34:17 -- pm/common@21 -- $ date +%s
11:34:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732016057
11:34:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732016057
00:03:03.977 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732016057_collect-vmstat.pm.log
00:03:03.977 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732016057_collect-cpu-load.pm.log
00:03:04.911 11:34:18 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:03:04.911 11:34:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:04.911 11:34:18 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:04.911 11:34:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:04.911 11:34:18 -- spdk/autobuild.sh@16 -- $ date -u
00:03:04.911 Tue Nov 19 11:34:18 AM UTC 2024
00:03:04.911 11:34:18 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:04.911 v24.09-1-gb18e1bd62
00:03:04.911 11:34:18 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:04.911 11:34:18 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:04.911 11:34:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:04.911 11:34:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:04.911 11:34:18 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.911 ************************************
00:03:04.911 START TEST asan
00:03:04.911 ************************************
00:03:04.911 using asan
00:03:04.911 11:34:18 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:03:04.911
00:03:04.911 real 0m0.000s
00:03:04.911 user 0m0.000s
00:03:04.911 sys 0m0.000s
00:03:04.911 11:34:18 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:04.911 ************************************
00:03:04.911 11:34:18 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:04.911 END TEST asan
00:03:04.911 ************************************
00:03:04.911 11:34:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:04.911 11:34:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:04.911 11:34:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:04.911 11:34:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:04.911 11:34:18 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.911 ************************************
00:03:04.911 START TEST ubsan
00:03:04.911 ************************************
00:03:04.911 using ubsan
00:03:04.911 11:34:18 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:03:04.911
00:03:04.911 real 0m0.000s
00:03:04.911 user 0m0.000s
00:03:04.911 sys 0m0.000s
00:03:04.911 11:34:18 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:04.911 11:34:18 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:04.911 ************************************
00:03:04.911 END TEST ubsan
00:03:04.911 ************************************
00:03:04.911 11:34:18 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:03:04.911 11:34:18 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:03:04.911 11:34:18 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:03:04.911 11:34:18 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:03:04.911 11:34:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:04.911 11:34:18 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.911 ************************************
00:03:04.911 START TEST build_native_dpdk
00:03:04.911 ************************************
00:03:04.911 11:34:18 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:03:04.911 caf0f5d395 version: 22.11.4
00:03:04.911 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:03:04.911 dc9c799c7d vhost: fix missing spinlock unlock
00:03:04.911 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:03:04.911 6ef77f2a5e net/gve: fix RX buffer size alignment
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:04.911 11:34:18 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:03:04.911 11:34:18 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:03:04.912 patching file config/rte_config.h
00:03:04.912 Hunk #1 succeeded at 60 (offset 1 line).
00:03:04.912 11:34:18 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:03:04.912 11:34:18 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1
00:03:04.912 patching file lib/pcapng/rte_pcapng.c
00:03:04.912 Hunk #1 succeeded at 110 (offset -18 lines).
00:03:04.912 11:34:18 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:04.912 11:34:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
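The three xtrace runs above (lt 21.11.0, lt 24.07.0, ge 24.07.0) all funnel into the same cmp_versions helper: each version string is split into an array on '.', '-' and ':' and the fields are compared numerically, left to right. A condensed standalone sketch of that logic (a rewrite for illustration, not the repo's exact code; fields are assumed to be plain numbers):

    # Usage: version_lt 22.11.4 24.07.0   ->   exit status 0 (true)
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing fields compare as 0, and 10# forces base 10 so a
            # zero-padded field like "07" is not parsed as octal.
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && return 0
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

That is why 22.11.4 vs 21.11.0 bails out at the first field (22 > 21), 22.11.4 vs 24.07.0 succeeds immediately (22 < 24), and the >= comparison against 24.07.0 returns 1, which in turn selects dpdk_kmods=false and the plain meson configure path below.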
00:03:04.912 11:34:18 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false
00:03:04.912 11:34:18 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s
00:03:04.912 11:34:18 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']'
00:03:04.912 11:34:18 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:03:04.912 11:34:18 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:09.107 The Meson build system
00:03:09.107 Version: 1.5.0
00:03:09.107 Source dir: /home/vagrant/spdk_repo/dpdk
00:03:09.107 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:03:09.107 Build type: native build
00:03:09.107 Program cat found: YES (/usr/bin/cat)
00:03:09.107 Project name: DPDK
00:03:09.107 Project version: 22.11.4
00:03:09.107 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:09.107 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:09.107 Host machine cpu family: x86_64
00:03:09.107 Host machine cpu: x86_64
00:03:09.107 Message: ## Building in Developer Mode ##
00:03:09.107 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:09.107 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:03:09.107 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:03:09.107 Program objdump found: YES (/usr/bin/objdump)
00:03:09.107 Program python3 found: YES (/usr/bin/python3)
00:03:09.107 Program cat found: YES (/usr/bin/cat)
00:03:09.107 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:09.107 Checking for size of "void *" : 8
00:03:09.107 Checking for size of "void *" : 8 (cached)
00:03:09.107 Library m found: YES
00:03:09.107 Library numa found: YES
00:03:09.107 Has header "numaif.h" : YES
00:03:09.107 Library fdt found: NO
00:03:09.107 Library execinfo found: NO
00:03:09.107 Has header "execinfo.h" : YES
00:03:09.107 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:09.107 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:09.107 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:09.107 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:09.107 Run-time dependency openssl found: YES 3.1.1
00:03:09.107 Run-time dependency libpcap found: YES 1.10.4
00:03:09.107 Has header "pcap.h" with dependency libpcap: YES
00:03:09.107 Compiler for C supports arguments -Wcast-qual: YES
00:03:09.107 Compiler for C supports arguments -Wdeprecated: YES
00:03:09.107 Compiler for C supports arguments -Wformat: YES
00:03:09.107 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:09.107 Compiler for C supports arguments -Wformat-security: NO
00:03:09.107 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:09.107 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:09.107 Compiler for C supports arguments -Wnested-externs: YES
00:03:09.107 Compiler for C supports arguments -Wold-style-definition: YES
00:03:09.107 Compiler for C supports arguments -Wpointer-arith: YES
00:03:09.107 Compiler for C supports arguments -Wsign-compare: YES
00:03:09.107 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:09.107 Compiler for C supports arguments -Wundef: YES
00:03:09.107 Compiler for C supports arguments -Wwrite-strings: YES
00:03:09.107 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:09.107 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:09.107 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:09.107 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:09.107 Compiler for C supports arguments -mavx512f: YES
00:03:09.107 Checking if "AVX512 checking" compiles: YES
00:03:09.107 Fetching value of define "__SSE4_2__" : 1
00:03:09.107 Fetching value of define "__AES__" : 1
00:03:09.107 Fetching value of define "__AVX__" : 1
00:03:09.107 Fetching value of define "__AVX2__" : 1
00:03:09.107 Fetching value of define "__AVX512BW__" : 1
00:03:09.107 Fetching value of define "__AVX512CD__" : 1
00:03:09.107 Fetching value of define "__AVX512DQ__" : 1
00:03:09.107 Fetching value of define "__AVX512F__" : 1
00:03:09.107 Fetching value of define "__AVX512VL__" : 1
00:03:09.107 Fetching value of define "__PCLMUL__" : 1
00:03:09.107 Fetching value of define "__RDRND__" : 1
00:03:09.107 Fetching value of define "__RDSEED__" : 1
00:03:09.107 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:09.107 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:09.107 Message: lib/kvargs: Defining dependency "kvargs"
00:03:09.107 Message: lib/telemetry: Defining dependency "telemetry"
00:03:09.107 Checking for function "getentropy" : YES
00:03:09.107 Message: lib/eal: Defining dependency "eal"
00:03:09.107 Message: lib/ring: Defining dependency "ring"
00:03:09.107 Message: lib/rcu: Defining dependency "rcu"
00:03:09.107 Message: lib/mempool: Defining dependency "mempool"
00:03:09.107 Message: lib/mbuf: Defining dependency "mbuf"
00:03:09.107 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:09.107 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:09.107 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:09.107 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:09.107 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:09.107 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:03:09.107 Compiler for C supports arguments -mpclmul: YES
00:03:09.107 Compiler for C supports arguments -maes: YES
00:03:09.107 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:09.108 Compiler for C supports arguments -mavx512bw: YES
00:03:09.108 Compiler for C supports arguments -mavx512dq: YES
00:03:09.108 Compiler for C supports arguments -mavx512vl: YES
00:03:09.108 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:09.108 Compiler for C supports arguments -mavx2: YES
00:03:09.108 Compiler for C supports arguments -mavx: YES
00:03:09.108 Message: lib/net: Defining dependency "net"
00:03:09.108 Message: lib/meter: Defining dependency "meter"
00:03:09.108 Message: lib/ethdev: Defining dependency "ethdev"
00:03:09.108 Message: lib/pci: Defining dependency "pci"
00:03:09.108 Message: lib/cmdline: Defining dependency "cmdline"
00:03:09.108 Message: lib/metrics: Defining dependency "metrics"
00:03:09.108 Message: lib/hash: Defining dependency "hash"
00:03:09.108 Message: lib/timer: Defining dependency "timer"
00:03:09.108 Fetching value of define "__AVX2__" : 1 (cached)
00:03:09.108 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:09.108 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:09.108 Fetching value of define "__AVX512CD__" : 1 (cached)
00:03:09.108 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:09.108 Message: lib/acl: Defining dependency "acl"
00:03:09.108 Message: lib/bbdev: Defining dependency "bbdev"
00:03:09.108 Message: lib/bitratestats: Defining dependency "bitratestats"
00:03:09.108 Run-time dependency libelf found: YES 0.191
00:03:09.108 Message: lib/bpf: Defining dependency "bpf"
00:03:09.108 Message: lib/cfgfile: Defining dependency "cfgfile"
00:03:09.108 Message: lib/compressdev: Defining dependency "compressdev"
00:03:09.108 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:09.108 Message: lib/distributor: Defining dependency "distributor"
00:03:09.108 Message: lib/efd: Defining dependency "efd"
00:03:09.108 Message: lib/eventdev: Defining dependency "eventdev"
00:03:09.108 Message: lib/gpudev: Defining dependency "gpudev"
00:03:09.108 Message: lib/gro: Defining dependency "gro"
00:03:09.108 Message: lib/gso: Defining dependency "gso"
00:03:09.108 Message: lib/ip_frag: Defining dependency "ip_frag"
00:03:09.108 Message: lib/jobstats: Defining dependency "jobstats"
00:03:09.108 Message: lib/latencystats: Defining dependency "latencystats"
00:03:09.108 Message: lib/lpm: Defining dependency "lpm"
00:03:09.108 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:09.108 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:09.108 Fetching value of define "__AVX512IFMA__" : 1
00:03:09.108 Message: lib/member: Defining dependency "member"
00:03:09.108 Message: lib/pcapng: Defining dependency "pcapng"
00:03:09.108 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:09.108 Message: lib/power: Defining dependency "power"
00:03:09.108 Message: lib/rawdev: Defining dependency "rawdev"
00:03:09.108 Message: lib/regexdev: Defining dependency "regexdev"
00:03:09.108 Message: lib/dmadev: Defining dependency "dmadev"
00:03:09.108 Message: lib/rib: Defining dependency "rib"
00:03:09.108 Message: lib/reorder: Defining dependency "reorder"
00:03:09.108 Message: lib/sched: Defining dependency "sched"
00:03:09.108 Message: lib/security: Defining dependency "security"
00:03:09.108 Message: lib/stack: Defining dependency "stack"
00:03:09.108 Has header "linux/userfaultfd.h" : YES
00:03:09.108 Message: lib/vhost: Defining dependency "vhost"
00:03:09.108 Message: lib/ipsec: Defining dependency "ipsec"
00:03:09.108 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:09.108 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:09.108 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:09.108 Message: lib/fib: Defining dependency "fib"
00:03:09.108 Message: lib/port: Defining dependency "port"
00:03:09.108 Message: lib/pdump: Defining dependency "pdump"
00:03:09.108 Message: lib/table: Defining dependency "table"
00:03:09.108 Message: lib/pipeline: Defining dependency "pipeline"
00:03:09.108 Message: lib/graph: Defining dependency "graph"
00:03:09.108 Message: lib/node: Defining dependency "node"
00:03:09.108 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:09.108 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:09.108 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:09.108 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:09.108 Compiler for C supports arguments -Wno-sign-compare: YES
00:03:09.108 Compiler for C supports arguments -Wno-unused-value: YES
00:03:09.108 Compiler for C supports arguments -Wno-format: YES
00:03:09.108 Compiler for C supports arguments -Wno-format-security: YES
00:03:09.108 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:03:09.108 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:09.108 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:03:09.108 Compiler for C supports arguments -Wno-unused-parameter: YES
00:03:10.043 Fetching value of define "__AVX2__" : 1 (cached)
00:03:10.043 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:10.043 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:10.043 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:10.043 Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:10.043 Compiler for C supports arguments -march=skylake-avx512: YES
00:03:10.043 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:03:10.043 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:10.043 Configuring doxy-api.conf using configuration
00:03:10.043 Program sphinx-build found: NO
00:03:10.043 Configuring rte_build_config.h using configuration
00:03:10.043 Message:
00:03:10.043 =================
00:03:10.043 Applications Enabled
00:03:10.043 =================
00:03:10.043
00:03:10.043 apps:
00:03:10.043 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:03:10.043 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:03:10.043 test-security-perf,
00:03:10.043
00:03:10.043 Message:
00:03:10.043 =================
00:03:10.043 Libraries Enabled
00:03:10.043 =================
00:03:10.043
00:03:10.043 libs:
00:03:10.043 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:03:10.043 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:03:10.043 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:03:10.043 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:03:10.043 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:03:10.043 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:03:10.043 table, pipeline, graph, node,
00:03:10.043
00:03:10.043 Message:
00:03:10.043 ===============
00:03:10.043 Drivers Enabled
00:03:10.043 ===============
00:03:10.043
00:03:10.043 common:
00:03:10.043
00:03:10.043 bus:
00:03:10.043 pci, vdev,
00:03:10.043 mempool:
00:03:10.043 ring,
00:03:10.043 dma:
00:03:10.043
00:03:10.043 net:
00:03:10.043 i40e,
00:03:10.043 raw:
00:03:10.043
00:03:10.043 crypto:
00:03:10.043
00:03:10.043 compress:
00:03:10.043
00:03:10.043 regex:
00:03:10.043
00:03:10.043 vdpa:
00:03:10.043
00:03:10.044 event:
00:03:10.044
00:03:10.044 baseband:
00:03:10.044
00:03:10.044 gpu:
00:03:10.044
00:03:10.044
00:03:10.044 Message:
00:03:10.044 =================
00:03:10.044 Content Skipped
00:03:10.044 =================
00:03:10.044
00:03:10.044 apps:
00:03:10.044
00:03:10.044 libs:
00:03:10.044 kni: explicitly disabled via build config (deprecated lib)
00:03:10.044 flow_classify: explicitly disabled via build config (deprecated lib)
00:03:10.044
00:03:10.044 drivers:
00:03:10.044 common/cpt: not in enabled drivers build config
00:03:10.044 common/dpaax: not in enabled drivers build config
00:03:10.044 common/iavf: not in enabled drivers build config
00:03:10.044 common/idpf: not in enabled drivers build config
00:03:10.044 common/mvep: not in enabled drivers build config
00:03:10.044 common/octeontx: not in enabled drivers build config
00:03:10.044 bus/auxiliary: not in enabled drivers build config
00:03:10.044 bus/dpaa: not in enabled drivers build config
00:03:10.044 bus/fslmc: not in enabled drivers build config
00:03:10.044 bus/ifpga: not in enabled drivers build config
00:03:10.044 bus/vmbus: not in enabled drivers build config
00:03:10.044 common/cnxk: not in enabled drivers build config
00:03:10.044 common/mlx5: not in enabled drivers build config
00:03:10.044 common/qat: not in enabled drivers build config
00:03:10.044 common/sfc_efx: not in enabled drivers build config
00:03:10.044 mempool/bucket: not in enabled drivers build config
00:03:10.044 mempool/cnxk: not in enabled drivers build config
00:03:10.044 mempool/dpaa: not in enabled drivers build config
00:03:10.044 mempool/dpaa2: not in enabled drivers build config
00:03:10.044 mempool/octeontx: not in enabled drivers build config
00:03:10.044 mempool/stack: not in enabled drivers build config
00:03:10.044 dma/cnxk: not in enabled drivers build config
00:03:10.044 dma/dpaa: not in enabled drivers build config
00:03:10.044 dma/dpaa2: not in enabled drivers build config
00:03:10.044 dma/hisilicon: not in enabled drivers build config
00:03:10.044 dma/idxd: not in enabled drivers build config
00:03:10.044 dma/ioat: not in enabled drivers build config
00:03:10.044 dma/skeleton: not in enabled drivers build config
00:03:10.044 net/af_packet: not in enabled drivers build config
00:03:10.044 net/af_xdp: not in enabled drivers build config
00:03:10.044 net/ark: not in enabled drivers build config
00:03:10.044 net/atlantic: not in enabled drivers build config
00:03:10.044 net/avp: not in enabled drivers build config
00:03:10.044 net/axgbe: not in enabled drivers build config
00:03:10.044 net/bnx2x: not in enabled drivers build config
00:03:10.044 net/bnxt: not in enabled drivers build config
00:03:10.044 net/bonding: not in enabled drivers build config
00:03:10.044 net/cnxk: not in enabled drivers build config
00:03:10.044 net/cxgbe: not in enabled drivers build config
00:03:10.044 net/dpaa: not in enabled drivers build config
00:03:10.044 net/dpaa2: not in enabled drivers build config
00:03:10.044 net/e1000: not in enabled drivers build config
00:03:10.044 net/ena: not in enabled drivers build config
00:03:10.044 net/enetc: not in enabled drivers build config
00:03:10.044 net/enetfec: not in enabled drivers build config
00:03:10.044 net/enic: not in enabled drivers build config
00:03:10.044 net/failsafe: not in enabled drivers build config
00:03:10.044 net/fm10k: not in enabled drivers build config
00:03:10.044 net/gve: not in enabled drivers build config
00:03:10.044 net/hinic: not in enabled drivers build config
00:03:10.044 net/hns3: not in enabled drivers build config
00:03:10.044 net/iavf: not in enabled drivers build config
00:03:10.044 net/ice: not in enabled drivers build config
00:03:10.044 net/idpf: not in enabled drivers build config
00:03:10.044 net/igc: not in enabled drivers build config
00:03:10.044 net/ionic: not in enabled drivers build config
00:03:10.044 net/ipn3ke: not in enabled drivers build config
00:03:10.044 net/ixgbe: not in enabled drivers build config
00:03:10.044 net/kni: not in enabled drivers build config
00:03:10.044 net/liquidio: not in enabled drivers build config
00:03:10.044 net/mana: not in enabled drivers build config
00:03:10.044 net/memif: not in enabled drivers build config
00:03:10.044 net/mlx4: not in enabled drivers build config
00:03:10.044 net/mlx5: not in enabled drivers build config
00:03:10.044 net/mvneta: not in enabled drivers build config
00:03:10.044 net/mvpp2: not in enabled drivers build config
00:03:10.044 net/netvsc: not in enabled drivers build config
00:03:10.044 net/nfb: not in enabled drivers build config
00:03:10.044 net/nfp: not in enabled drivers build config
00:03:10.044 net/ngbe: not in enabled drivers build config
00:03:10.044 net/null: not in enabled drivers build config
00:03:10.044 net/octeontx: not in enabled drivers build config
00:03:10.044 net/octeon_ep: not in enabled drivers build config
00:03:10.044 net/pcap: not in enabled drivers build config
00:03:10.044 net/pfe: not in enabled drivers build config
00:03:10.044 net/qede: not in enabled drivers build config
00:03:10.044 net/ring: not in enabled drivers build config
00:03:10.044 net/sfc: not in enabled drivers build config
00:03:10.044 net/softnic: not in enabled drivers build config
00:03:10.044 net/tap: not in enabled drivers build config
00:03:10.044 net/thunderx: not in enabled drivers build config
00:03:10.044 net/txgbe: not in enabled drivers build config
00:03:10.044 net/vdev_netvsc: not in enabled drivers build config
00:03:10.044 net/vhost: not in enabled drivers build config
00:03:10.044 net/virtio: not in enabled drivers build config
00:03:10.044 net/vmxnet3: not in enabled drivers build config
00:03:10.044 raw/cnxk_bphy: not in enabled drivers build config
00:03:10.044 raw/cnxk_gpio: not in enabled drivers build config
00:03:10.044 raw/dpaa2_cmdif: not in enabled drivers build config
00:03:10.044 raw/ifpga: not in enabled drivers build config
00:03:10.044 raw/ntb: not in enabled drivers build config
00:03:10.044 raw/skeleton: not in enabled drivers build config
00:03:10.044 crypto/armv8: not in enabled drivers build config
00:03:10.044 crypto/bcmfs: not in enabled drivers build config
00:03:10.044 crypto/caam_jr: not in enabled drivers build config
00:03:10.044 crypto/ccp: not in enabled drivers build config
00:03:10.044 crypto/cnxk: not in enabled drivers build config
00:03:10.044 crypto/dpaa_sec: not in enabled drivers build config
00:03:10.044 crypto/dpaa2_sec: not in enabled drivers build config
00:03:10.044 crypto/ipsec_mb: not in enabled drivers build config
00:03:10.044 crypto/mlx5: not in enabled drivers build config
00:03:10.044 crypto/mvsam: not in enabled drivers build config
00:03:10.044 crypto/nitrox: not in enabled drivers build config
00:03:10.044 crypto/null: not in enabled drivers build config
00:03:10.044 crypto/octeontx: not in enabled drivers build config
00:03:10.044 crypto/openssl: not in enabled drivers build config
00:03:10.044 crypto/scheduler: not in enabled drivers build config
00:03:10.044 crypto/uadk: not in enabled drivers build config
00:03:10.044 crypto/virtio: not in enabled drivers build config
00:03:10.044 compress/isal: not in enabled drivers build config
00:03:10.044 compress/mlx5: not in enabled drivers build config
00:03:10.044 compress/octeontx: not in enabled drivers build config
00:03:10.044 compress/zlib: not in enabled drivers build config
00:03:10.044 regex/mlx5: not in enabled drivers build config
00:03:10.044 regex/cn9k: not in enabled drivers build config
00:03:10.044 vdpa/ifc: not in enabled drivers build config
00:03:10.044 vdpa/mlx5: not in enabled drivers build config
00:03:10.044 vdpa/sfc: not in enabled drivers build config
00:03:10.044 event/cnxk: not in enabled drivers build config
00:03:10.044 event/dlb2: not in enabled drivers build config
00:03:10.044 event/dpaa: not in enabled drivers build config
00:03:10.044 event/dpaa2: not in enabled drivers build config
00:03:10.044 event/dsw: not in enabled drivers build config
00:03:10.044 event/opdl: not in enabled drivers build config
00:03:10.044 event/skeleton: not in enabled drivers build config
00:03:10.044 event/sw: not in enabled drivers build config
00:03:10.044 event/octeontx: not in enabled drivers build config
00:03:10.044 baseband/acc: not in enabled drivers build config
00:03:10.044 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:03:10.044 baseband/fpga_lte_fec: not in enabled drivers build config
00:03:10.044 baseband/la12xx: not in enabled drivers build config
00:03:10.044 baseband/null: not in enabled drivers build config
00:03:10.044 baseband/turbo_sw: not in enabled drivers build config
00:03:10.044 gpu/cuda: not in enabled drivers build config
00:03:10.044
00:03:10.044
00:03:10.044 Build targets in project: 309
00:03:10.044
00:03:10.044 DPDK 22.11.4
00:03:10.044
00:03:10.044 User defined options
00:03:10.044 libdir : lib
00:03:10.044 prefix : /home/vagrant/spdk_repo/dpdk/build
00:03:10.044 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:03:10.044 c_link_args :
00:03:10.044 enable_docs : false
00:03:10.044 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:10.044 enable_kmods : false
00:03:10.044 machine : native
00:03:10.044 tests : false
00:03:10.044
00:03:10.044 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:10.044 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:03:10.044 11:34:23 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:03:10.044 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:10.303 [1/738] Generating lib/rte_kvargs_def with a custom command
00:03:10.303 [2/738] Generating lib/rte_kvargs_mingw with a custom command
00:03:10.303 [3/738] Generating lib/rte_telemetry_mingw with a custom command
00:03:10.303 [4/738] Generating lib/rte_telemetry_def with a custom command
00:03:10.303 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:10.303 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:10.303 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:10.303 [8/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:10.303 [9/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:10.303 [10/738] Linking static target lib/librte_kvargs.a
00:03:10.303 [11/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:10.303 [12/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:10.303 [13/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:10.303 [14/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:10.303 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:10.303 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:10.561 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:10.561 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:10.561 [19/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.561 [20/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:10.561 [21/738] Linking target lib/librte_kvargs.so.23.0
00:03:10.561 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:10.561 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:03:10.561 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:10.561 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:10.561 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:10.561 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:10.561 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:10.561 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:10.561 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:10.819 [31/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:10.819 [32/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:10.819 [33/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:10.819 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:10.819 [35/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:10.819 [36/738] Linking static target lib/librte_telemetry.a
00:03:10.819 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:10.819 [38/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:03:10.819 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:10.819 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:10.819 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:11.077 [42/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:11.077 [43/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:11.077 [44/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:11.077 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:11.077 [46/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.077 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:11.077 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:11.077 [49/738] Linking target lib/librte_telemetry.so.23.0
00:03:11.077 [50/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:11.077 [51/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:11.078 [52/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:11.078 [53/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:11.078 [54/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:11.078 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:11.078 [56/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:03:11.078 [57/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:11.078 [58/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:11.078 [59/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:11.078 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:11.336 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:11.336 [62/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:11.336 [63/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:11.336 [64/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:11.336 [65/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:03:11.336 [66/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:11.336 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:11.336 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:11.336 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:11.336 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:11.336 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:11.336 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:11.336 [73/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:11.336 [74/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:11.336 [75/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:11.336 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:11.336 [77/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:11.336 [78/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:11.336 [79/738] Generating lib/rte_eal_def with a custom command
00:03:11.336 [80/738] Generating lib/rte_eal_mingw with a custom command
00:03:11.336 [81/738] Generating lib/rte_ring_def with a custom command
00:03:11.336 [82/738] Generating lib/rte_ring_mingw with a custom command
00:03:11.594 [83/738] Generating lib/rte_rcu_def with a custom command
00:03:11.594 [84/738] Generating lib/rte_rcu_mingw with a custom command
00:03:11.594 [85/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:11.594 [86/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:11.594 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:11.594 [88/738] Linking static target lib/librte_ring.a
00:03:11.594 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:11.594 [90/738] Generating lib/rte_mempool_def with a custom command
00:03:11.594 [91/738] Generating lib/rte_mempool_mingw with a custom command
00:03:11.594 [92/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:11.852 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:11.852 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.852 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:11.852 [96/738] Generating lib/rte_mbuf_def with a custom command
00:03:11.852 [97/738] Generating lib/rte_mbuf_mingw with a custom command
00:03:11.852 [98/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:11.852 [99/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:11.852 [100/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:11.852 [101/738] Linking static target lib/librte_eal.a
00:03:12.110 [102/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:12.110 [103/738] Linking static target lib/librte_rcu.a
00:03:12.110 [104/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:12.110 [105/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:12.110 [106/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.110 [107/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:12.110 [108/738] Linking static target lib/librte_mempool.a
00:03:12.368 [109/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:12.368 [110/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:12.368 [111/738] Generating lib/rte_net_def with a custom command
00:03:12.368 [112/738] Generating lib/rte_net_mingw with a custom command
00:03:12.368 [113/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:12.368 [114/738] Generating lib/rte_meter_def with a custom command
00:03:12.368 [115/738] Generating lib/rte_meter_mingw with a custom command
00:03:12.368 [116/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
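[Editor's note] The "User defined options" summary and ninja invocation recorded above can be approximated by hand as follows. The option values and paths are taken verbatim from this log; the exact wrapper arguments used by autobuild_common.sh are not shown in the log (and the WARNING above indicates it actually ran in the older `meson [options]` form without the explicit `setup` subcommand), so this is a sketch, not the job's literal command line:

# Reconstructed configure + build, assembled from the summary above.
cd /home/vagrant/spdk_repo/dpdk
meson setup build-tmp \
  --libdir lib \
  --prefix /home/vagrant/spdk_repo/dpdk/build \
  -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Denable_docs=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
  -Denable_kmods=false \
  -Dmachine=native \
  -Dtests=false
# The [N/738] progress counter in the log lines below comes from ninja.
ninja -C build-tmp -j10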
00:03:12.368 [117/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:12.368 [118/738] Linking static target lib/librte_meter.a
00:03:12.368 [119/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:12.368 [120/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:12.626 [121/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.626 [122/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:03:12.626 [123/738] Linking static target lib/librte_net.a
00:03:12.626 [124/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:12.626 [125/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:12.626 [126/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:12.626 [127/738] Linking static target lib/librte_mbuf.a
00:03:12.626 [128/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:12.884 [129/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.884 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:12.884 [131/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:12.884 [132/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.142 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:13.142 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:13.142 [135/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.142 [136/738] Generating lib/rte_ethdev_def with a custom command
00:03:13.142 [137/738] Generating lib/rte_ethdev_mingw with a custom command
00:03:13.142 [138/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:13.142 [139/738] Generating lib/rte_pci_def with a custom command
00:03:13.142 [140/738] Generating lib/rte_pci_mingw with a custom command
00:03:13.142 [141/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:13.142 [142/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:13.142 [143/738] Linking static target lib/librte_pci.a
00:03:13.425 [144/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:13.425 [145/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:13.425 [146/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:13.425 [147/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:13.425 [148/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.425 [149/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:13.425 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:13.425 [151/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:13.425 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:13.425 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:13.425 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:13.425 [155/738] Generating lib/rte_cmdline_def with a custom command
00:03:13.425 [156/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:13.425 [157/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:13.684 [158/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:13.684 [159/738] Generating lib/rte_cmdline_mingw with a custom command
00:03:13.684 [160/738] Generating lib/rte_metrics_def with a custom command
00:03:13.684 [161/738] Generating lib/rte_metrics_mingw with a custom command
00:03:13.684 [162/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:13.684 [163/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:03:13.684 [164/738] Generating lib/rte_hash_def with a custom command
00:03:13.684 [165/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:13.684 [166/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:13.684 [167/738] Linking static target lib/librte_cmdline.a
00:03:13.684 [168/738] Generating lib/rte_hash_mingw with a custom command
00:03:13.684 [169/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:13.684 [170/738] Generating lib/rte_timer_mingw with a custom command
00:03:13.684 [171/738] Generating lib/rte_timer_def with a custom command
00:03:13.942 [172/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:03:13.942 [173/738] Linking static target lib/librte_metrics.a
00:03:13.942 [174/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:13.942 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:13.942 [176/738] Linking static target lib/librte_timer.a
00:03:14.203 [177/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:14.203 [178/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.203 [179/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.203 [180/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.203 [181/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:03:14.461 [182/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:03:14.461 [183/738] Generating lib/rte_acl_def with a custom command
00:03:14.461 [184/738] Generating lib/rte_acl_mingw with a custom command
00:03:14.461 [185/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:03:14.461 [186/738] Generating lib/rte_bbdev_def with a custom command
00:03:14.461 [187/738] Generating lib/rte_bbdev_mingw with a custom command
00:03:14.461 [188/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:03:14.461 [189/738] Generating lib/rte_bitratestats_def with a custom command
00:03:14.461 [190/738] Generating lib/rte_bitratestats_mingw with a custom command
00:03:14.461 [191/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:03:14.461 [192/738] Linking static target lib/librte_ethdev.a
00:03:14.719 [193/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:03:14.719 [194/738] Linking static target lib/librte_bitratestats.a
00:03:14.719 [195/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:03:14.719 [196/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.977 [197/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:03:14.977 [198/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:03:14.977 [199/738] Linking static target lib/librte_bbdev.a
00:03:14.977 [200/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:03:15.235 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:03:15.235 [202/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.235 [203/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:03:15.494 [204/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:03:15.494 [205/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:03:15.494 [206/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:03:15.494 [207/738] Generating lib/rte_bpf_def with a custom command
00:03:15.494 [208/738] Generating lib/rte_bpf_mingw with a custom command
00:03:15.494 [209/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:15.494 [210/738] Linking static target lib/librte_hash.a
00:03:15.752 [211/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:03:15.752 [212/738] Generating lib/rte_cfgfile_def with a custom command
00:03:15.752 [213/738] Generating lib/rte_cfgfile_mingw with a custom command
00:03:15.752 [214/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:03:15.752 [215/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:03:15.752 [216/738] Linking static target lib/librte_cfgfile.a
00:03:16.011 [217/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.011 [218/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.011 [219/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:16.011 [220/738] Generating lib/rte_compressdev_def with a custom command
00:03:16.011 [221/738] Generating lib/rte_compressdev_mingw with a custom command
00:03:16.011 [222/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:03:16.011 [223/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:16.011 [224/738] Generating lib/rte_cryptodev_def with a custom command
00:03:16.011 [225/738] Generating lib/rte_cryptodev_mingw with a custom command
00:03:16.269 [226/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:03:16.269 [227/738] Linking static target lib/librte_bpf.a
00:03:16.269 [228/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:03:16.269 [229/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:16.269 [230/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:03:16.269 [231/738] Linking static target lib/librte_acl.a
00:03:16.269 [232/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:16.269 [233/738] Linking static target lib/librte_compressdev.a
00:03:16.269 [234/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.527 [235/738] Generating lib/rte_distributor_def with a custom command
00:03:16.527 [236/738] Generating lib/rte_distributor_mingw with a custom command
00:03:16.527 [237/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:16.527 [238/738] Generating lib/rte_efd_def with a custom command
00:03:16.527 [239/738] Generating lib/rte_efd_mingw with a custom command
00:03:16.527 [240/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.527 [241/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:03:16.527 [242/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.527 [243/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:03:16.527 [244/738] Linking target lib/librte_eal.so.23.0
00:03:16.785 [245/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:03:16.785 [246/738] Linking target lib/librte_ring.so.23.0
00:03:16.785 [247/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:03:16.785 [248/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:03:16.785 [249/738] Linking target lib/librte_meter.so.23.0
00:03:16.785 [250/738] Linking target lib/librte_pci.so.23.0
00:03:16.785 [251/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:03:16.785 [252/738] Linking target lib/librte_rcu.so.23.0
00:03:16.785 [253/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:03:16.785 [254/738] Linking target lib/librte_mempool.so.23.0
00:03:16.785 [255/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:03:16.785 [256/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:03:16.785 [257/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:03:16.785 [258/738] Linking target lib/librte_timer.so.23.0
00:03:16.785 [259/738] Linking target lib/librte_cfgfile.so.23.0
00:03:17.043 [260/738] Linking static target lib/librte_distributor.a
00:03:17.043 [261/738] Linking target lib/librte_acl.so.23.0
00:03:17.043 [262/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.043 [263/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:03:17.043 [264/738] Linking target lib/librte_mbuf.so.23.0
00:03:17.043 [265/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:03:17.043 [266/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:03:17.043 [267/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.043 [268/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:03:17.043 [269/738] Linking target lib/librte_net.so.23.0
00:03:17.301 [270/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:03:17.301 [271/738] Linking target lib/librte_cmdline.so.23.0
00:03:17.301 [272/738] Linking target lib/librte_hash.so.23.0
00:03:17.301 [273/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:03:17.301 [274/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:03:17.301 [275/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:03:17.301 [276/738] Linking target lib/librte_bbdev.so.23.0
00:03:17.301 [277/738] Linking static target lib/librte_efd.a
00:03:17.301 [278/738] Linking target lib/librte_compressdev.so.23.0
00:03:17.301 [279/738] Linking target lib/librte_distributor.so.23.0
00:03:17.301 [280/738] Generating lib/rte_eventdev_def with a custom command
00:03:17.301 [281/738] Generating lib/rte_eventdev_mingw with a custom command
00:03:17.301 [282/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:03:17.301 [283/738] Generating lib/rte_gpudev_def with a custom command
00:03:17.301 [284/738] Generating lib/rte_gpudev_mingw with a custom command
00:03:17.560 [285/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.560 [286/738] Linking target lib/librte_efd.so.23.0
00:03:17.560 [287/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.818 [288/738] Linking target lib/librte_ethdev.so.23.0
00:03:17.818 [289/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:03:17.818 [290/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:03:17.818 [291/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:03:17.818 [292/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:03:17.818 [293/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:03:17.818 [294/738] Generating lib/rte_gro_def with a custom command
00:03:17.818 [295/738] Linking target lib/librte_metrics.so.23.0
00:03:17.818 [296/738] Generating lib/rte_gro_mingw with a custom command
00:03:17.818 [297/738] Linking target lib/librte_bpf.so.23.0
00:03:17.818 [298/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:17.818 [299/738] Linking static target lib/librte_cryptodev.a
00:03:17.818 [300/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:03:17.818 [301/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:03:17.818 [302/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:03:17.818 [303/738] Linking target lib/librte_bitratestats.so.23.0
00:03:17.818 [304/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:03:18.076 [305/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:03:18.076 [306/738] Linking static target lib/librte_gpudev.a
00:03:18.076 [307/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:03:18.076 [308/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:03:18.076 [309/738] Generating lib/rte_gso_def with a custom command
00:03:18.076 [310/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:03:18.076 [311/738] Linking static target lib/librte_gro.a
00:03:18.335 [312/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:03:18.335 [313/738] Generating lib/rte_gso_mingw with a custom command
00:03:18.335 [314/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:03:18.335 [315/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:03:18.335 [316/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:03:18.335 [317/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.335 [318/738] Linking target lib/librte_gro.so.23.0
00:03:18.335 [319/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:03:18.335 [320/738] Linking static target lib/librte_gso.a
00:03:18.335 [321/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.594 [322/738] Linking target lib/librte_gpudev.so.23.0
00:03:18.594 [323/738] Generating lib/rte_ip_frag_def with a custom command
00:03:18.594 [324/738] Generating lib/rte_ip_frag_mingw with a custom command
00:03:18.594 [325/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.594 [326/738] Linking target lib/librte_gso.so.23.0
00:03:18.594 [327/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:03:18.594 [328/738] Generating lib/rte_jobstats_def with a custom command
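[Editor's note] The "Generating symbol file" and "*.sym_chk" steps above come from DPDK's build verifying each shared object's exported symbols (the check-symbols.sh helper located during configure). The same exported-symbol list can be inspected by hand once a library has linked; a sketch, with the path taken from this log's build directory (adjust to taste):

# List the dynamic, defined symbols of one shared object built above.
nm -D --defined-only \
  /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.23.0 | head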
00:03:18.595 [329/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:03:18.595 [330/738] Generating lib/rte_jobstats_mingw with a custom command
00:03:18.595 [331/738] Linking static target lib/librte_eventdev.a
00:03:18.595 [332/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:03:18.595 [333/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:03:18.595 [334/738] Generating lib/rte_latencystats_def with a custom command
00:03:18.595 [335/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:03:18.595 [336/738] Linking static target lib/librte_jobstats.a
00:03:18.595 [337/738] Generating lib/rte_latencystats_mingw with a custom command
00:03:18.595 [338/738] Generating lib/rte_lpm_def with a custom command
00:03:18.595 [339/738] Generating lib/rte_lpm_mingw with a custom command
00:03:18.595 [340/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:03:18.853 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:03:18.853 [342/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.853 [343/738] Linking target lib/librte_jobstats.so.23.0
00:03:18.853 [344/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:03:18.853 [345/738] Linking static target lib/librte_ip_frag.a
00:03:18.853 [346/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:03:18.853 [347/738] Linking static target lib/librte_latencystats.a
00:03:18.853 [348/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:03:19.112 [349/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.112 [350/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.112 [351/738] Linking target lib/librte_latencystats.so.23.0
00:03:19.112 [352/738] Linking target lib/librte_ip_frag.so.23.0
00:03:19.112 [353/738] Generating lib/rte_member_def with a custom command
00:03:19.112 [354/738] Generating lib/rte_member_mingw with a custom command
00:03:19.112 [355/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:03:19.112 [356/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:03:19.112 [357/738] Generating lib/rte_pcapng_def with a custom command
00:03:19.112 [358/738] Generating lib/rte_pcapng_mingw with a custom command
00:03:19.371 [359/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:19.371 [360/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.371 [361/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o
00:03:19.371 [362/738] Linking target lib/librte_cryptodev.so.23.0
00:03:19.371 [363/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:19.371 [364/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:03:19.371 [365/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:03:19.371 [366/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:03:19.371 [367/738] Linking static target lib/librte_lpm.a
00:03:19.371 [368/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:03:19.371 [369/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:19.371 [370/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:19.630 [371/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:19.630 [372/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:03:19.630 [373/738] Generating lib/rte_power_def with a custom command
00:03:19.630 [374/738] Generating lib/rte_power_mingw with a custom command
00:03:19.630 [375/738] Generating lib/rte_rawdev_def with a custom command
00:03:19.630 [376/738] Generating lib/rte_rawdev_mingw with a custom command
00:03:19.630 [377/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:03:19.630 [378/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.630 [379/738] Linking static target lib/librte_pcapng.a
00:03:19.630 [380/738] Linking target lib/librte_lpm.so.23.0
00:03:19.630 [381/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:19.630 [382/738] Generating lib/rte_regexdev_def with a custom command
00:03:19.630 [383/738] Generating lib/rte_regexdev_mingw with a custom command
00:03:19.630 [384/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:03:19.630 [385/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:03:19.889 [386/738] Generating lib/rte_dmadev_def with a custom command
00:03:19.889 [387/738] Generating lib/rte_dmadev_mingw with a custom command
00:03:19.889 [388/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:19.889 [389/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:19.889 [390/738] Generating lib/rte_rib_def with a custom command
00:03:19.889 [391/738] Linking static target lib/librte_power.a
00:03:19.889 [392/738] Generating lib/rte_rib_mingw with a custom command
00:03:19.889 [393/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.889 [394/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:03:19.889 [395/738] Linking static target lib/librte_rawdev.a
00:03:19.889 [396/738] Linking target lib/librte_pcapng.so.23.0
00:03:19.889 [397/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.889 [398/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:03:19.889 [399/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:03:19.889 [400/738] Linking static target lib/librte_regexdev.a
00:03:19.889 [401/738] Linking target lib/librte_eventdev.so.23.0
00:03:20.148 [402/738] Generating lib/rte_reorder_def with a custom command
00:03:20.148 [403/738] Generating lib/rte_reorder_mingw with a custom command
00:03:20.148 [404/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:20.148 [405/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:03:20.148 [406/738] Linking static target lib/librte_dmadev.a
00:03:20.148 [407/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:03:20.148 [408/738] Linking static target lib/librte_member.a
00:03:20.148 [409/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.148 [410/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:03:20.148 [411/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:03:20.148 [412/738] Linking target lib/librte_rawdev.so.23.0
00:03:20.148 [413/738] Generating lib/rte_sched_def with a custom command
00:03:20.148 [414/738] Generating lib/rte_sched_mingw with a custom command
00:03:20.148 [415/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:03:20.148 [416/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:03:20.407 [417/738] Generating lib/rte_security_def with a custom command
00:03:20.407 [418/738] Generating lib/rte_security_mingw with a custom command
00:03:20.407 [419/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:03:20.407 [420/738] Linking static target lib/librte_rib.a
00:03:20.407 [421/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.407 [422/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:03:20.407 [423/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:03:20.407 [424/738] Linking target lib/librte_member.so.23.0
00:03:20.407 [425/738] Generating lib/rte_stack_def with a custom command
00:03:20.407 [426/738] Generating lib/rte_stack_mingw with a custom command
00:03:20.407 [427/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:03:20.407 [428/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:20.407 [429/738] Linking static target lib/librte_stack.a
00:03:20.407 [430/738] Linking static target lib/librte_reorder.a
00:03:20.407 [431/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.407 [432/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.407 [433/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.407 [434/738] Linking target lib/librte_dmadev.so.23.0
00:03:20.666 [435/738] Linking target lib/librte_power.so.23.0
00:03:20.666 [436/738] Linking target lib/librte_regexdev.so.23.0
00:03:20.666 [437/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:20.666 [438/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.666 [439/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:03:20.666 [440/738] Linking target lib/librte_stack.so.23.0
00:03:20.666 [441/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.666 [442/738] Linking target lib/librte_reorder.so.23.0
00:03:20.666 [443/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.666 [444/738] Linking target lib/librte_rib.so.23.0
00:03:20.666 [445/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:20.666 [446/738] Linking static target lib/librte_security.a
00:03:20.666 [447/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:03:20.925 [448/738] Generating lib/rte_vhost_def with a custom command
00:03:20.925 [449/738] Generating lib/rte_vhost_mingw with a custom command
00:03:20.925 [450/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:20.925 [451/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.925 [452/738] Linking target lib/librte_security.so.23.0
00:03:20.925 [453/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:20.925 [454/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:20.925 [455/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:03:21.184 [456/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:03:21.443 [457/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:21.443 [458/738] Generating lib/rte_ipsec_def with a custom command
00:03:21.443 [459/738] Generating lib/rte_ipsec_mingw with a custom command
00:03:21.443 [460/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:21.443 [461/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:03:21.443 [462/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:03:21.443 [463/738] Linking static target lib/librte_sched.a
00:03:21.443 [464/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:03:21.443 [465/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:03:21.443 [466/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:03:21.701 [467/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:03:21.701 [468/738] Generating lib/rte_fib_def with a custom command
00:03:21.701 [469/738] Generating lib/rte_fib_mingw with a custom command
00:03:21.701 [470/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.960 [471/738] Linking target lib/librte_sched.so.23.0
00:03:21.960 [472/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:03:21.960 [473/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:03:21.960 [474/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:03:21.960 [475/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:03:21.960 [476/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:03:21.960 [477/738] Linking static target lib/librte_ipsec.a
00:03:22.238 [478/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.238 [479/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:03:22.238 [480/738] Linking target lib/librte_ipsec.so.23.0
00:03:22.238 [481/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:03:22.238 [482/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:03:22.503 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:03:22.503 [484/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:03:22.503 [485/738] Linking static target lib/librte_fib.a
00:03:22.503 [486/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:03:22.503 [487/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:03:22.762 [488/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.762 [489/738] Linking target lib/librte_fib.so.23.0
00:03:22.762 [490/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:03:22.762 [491/738] Generating lib/rte_port_def with a custom command
00:03:22.762 [492/738] Generating lib/rte_port_mingw with a custom command
00:03:22.762 [493/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:03:22.762 [494/738] Generating lib/rte_pdump_def with a custom command
00:03:22.762 [495/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:03:22.762 [496/738] Generating lib/rte_pdump_mingw with a custom command
00:03:23.020 [497/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:03:23.020 [498/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:03:23.020 [499/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:03:23.279 [500/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:03:23.279 [501/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:03:23.279 [502/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:03:23.279 [503/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:03:23.279 [504/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:03:23.279 [505/738] Linking static target lib/librte_port.a
00:03:23.538 [506/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:03:23.538 [507/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:03:23.538 [508/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:03:23.538 [509/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:03:23.538 [510/738] Linking static target lib/librte_pdump.a
00:03:23.538 [511/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:03:23.538 [512/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:03:23.796 [513/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:03:23.796 [514/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:03:23.796 [515/738] Linking target lib/librte_port.so.23.0
00:03:23.796 [516/738] Linking target lib/librte_pdump.so.23.0
00:03:23.796 [517/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:03:23.796 [518/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:03:23.796 [519/738] Generating lib/rte_table_def with a custom command
00:03:23.796 [520/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:03:23.796 [521/738] Generating lib/rte_table_mingw with a custom command
00:03:24.055 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:03:24.055 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:03:24.055 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:03:24.055 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:03:24.314 [526/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:24.314 [527/738] Generating lib/rte_pipeline_def with a custom command
00:03:24.314 [528/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:03:24.314 [529/738] Generating lib/rte_pipeline_mingw with a custom command
00:03:24.314 [530/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:03:24.573 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:03:24.573 [532/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:03:24.573 [533/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:03:24.573 [534/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:03:24.573 [535/738] Generating lib/rte_graph_def with a custom command
00:03:24.573 [536/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:03:24.573 [537/738] Generating lib/rte_graph_mingw with a custom command
00:03:24.573 [538/738] Linking static target lib/librte_table.a
00:03:24.573 [539/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:03:24.832 [540/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:03:24.832 [541/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:03:24.832 [542/738] Linking static target lib/librte_graph.a
00:03:24.832 [543/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:03:25.091 [544/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.091 [545/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:03:25.091 [546/738] Linking target lib/librte_table.so.23.0
00:03:25.091 [547/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:03:25.091 [548/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:03:25.349 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o
00:03:25.349 [550/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:03:25.349 [551/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.349 [552/738] Linking target lib/librte_graph.so.23.0
00:03:25.349 [553/738] Compiling C object lib/librte_node.a.p/node_log.c.o
00:03:25.349 [554/738] Generating lib/rte_node_def with a custom command
00:03:25.349 [555/738] Generating lib/rte_node_mingw with a custom command
00:03:25.349 [556/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:03:25.609 [557/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:25.609 [558/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:03:25.609 [559/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:03:25.609 [560/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:25.609 [561/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:25.609 [562/738] Generating drivers/rte_bus_pci_def with a custom command
00:03:25.609 [563/738] Generating drivers/rte_bus_pci_mingw with a custom command
00:03:25.868 [564/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:03:25.868 [565/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:25.868 [566/738] Generating drivers/rte_bus_vdev_def with a custom command
00:03:25.868 [567/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:25.868 [568/738] Generating drivers/rte_bus_vdev_mingw with a custom command
00:03:25.868 [569/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:03:25.868 [570/738] Generating drivers/rte_mempool_ring_def with a custom command
00:03:25.868 [571/738] Linking static target lib/librte_node.a
00:03:25.868 [572/738] Generating drivers/rte_mempool_ring_mingw with a custom command
00:03:25.868 [573/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:03:25.868 [574/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:25.868 [575/738] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:25.868 [576/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:25.868 [577/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:25.868 [578/738] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:25.868 [579/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.126 [580/738] Linking target lib/librte_node.so.23.0
00:03:26.126 [581/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:26.126 [582/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:26.126 [583/738] Linking static target drivers/librte_bus_vdev.a
00:03:26.126 [584/738] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:26.126 [585/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:26.126 [586/738] Linking static target drivers/librte_bus_pci.a
00:03:26.126 [587/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.126 [588/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:26.126 [589/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:26.126 [590/738] Linking target drivers/librte_bus_vdev.so.23.0
00:03:26.384 [591/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:26.384 [592/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:03:26.384 [593/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:26.384 [594/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:26.385 [595/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.385 [596/738] Linking target drivers/librte_bus_pci.so.23.0
00:03:26.643 [597/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:26.643 [598/738] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:26.643 [599/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:03:26.643 [600/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:26.643 [601/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:26.643 [602/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:26.643 [603/738] Linking static target drivers/librte_mempool_ring.a
00:03:26.643 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:26.643 [605/738] Linking target drivers/librte_mempool_ring.so.23.0
00:03:26.902 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:03:26.902 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:27.469 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:27.469 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:27.469 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:27.469 [611/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:27.469 [612/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:28.036 [613/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:28.036 [614/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:28.036 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:28.036 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:28.036 [617/738] Generating drivers/rte_net_i40e_def with a custom command
00:03:28.036 [618/738] Generating drivers/rte_net_i40e_mingw with a custom command
00:03:28.036 [619/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:28.603 [620/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:03:28.603 [621/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:03:28.861 [622/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:03:28.861 [623/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:29.119 [624/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:03:29.119 [625/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:29.119 [626/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:03:29.119 [627/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:29.119 [628/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:29.119 [629/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:29.119 [630/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:29.686 [631/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:29.686 [632/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:03:29.686 [633/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:29.686 [634/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:03:29.686 [635/738] Linking static target drivers/libtmp_rte_net_i40e.a
00:03:29.945 [636/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:29.945 [637/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:29.945 [638/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:29.945 [639/738] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:29.945 [640/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:29.945 [641/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:29.945 [642/738] Linking static target drivers/librte_net_i40e.a
00:03:29.945 [643/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:30.203 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:30.203 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:30.462 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:30.462 [647/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:30.462 [648/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:30.462 [649/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:30.462 [650/738] Linking target drivers/librte_net_i40e.so.23.0
00:03:30.462 [651/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:30.462 [652/738] Linking static target lib/librte_vhost.a
00:03:30.720 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:30.720 [654/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:30.720 [655/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:30.720 [656/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:30.720 [657/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:30.978 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:30.978 [659/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:30.978 [660/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:30.978 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:30.978 [662/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:30.978 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:31.237 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:31.237 [665/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:31.237 [666/738] Linking target lib/librte_vhost.so.23.0
00:03:31.495 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:31.495 [668/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:31.495 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:31.753 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:31.753 [671/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:32.012 [672/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:03:32.012 [673/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:32.012 [674/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:03:32.012 [675/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:03:32.012 [676/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:32.012 [677/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:03:32.269 [678/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:03:32.269 [679/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:03:32.269 [680/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:03:32.269 [681/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:03:32.269 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:03:32.608 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:03:32.608 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:03:32.608 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:03:32.608 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:03:32.608 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:03:32.608 [688/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:03:32.867 [689/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:03:32.867 [690/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:03:32.867 [691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:03:33.127 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:03:33.127 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:03:33.127 [694/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:03:33.385 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:03:33.385 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:03:33.385 [697/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:03:33.385 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:03:33.644 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:03:33.904 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:03:33.904 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:03:33.904 [702/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:33.904 [703/738] Linking static target lib/librte_pipeline.a
00:03:33.904 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:33.904 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:34.165 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:34.165 [707/738] Linking target app/dpdk-dumpcap
00:03:34.165 [708/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:03:34.165 [709/738] Linking target app/dpdk-pdump
00:03:34.165 [710/738] Linking target app/dpdk-proc-info
00:03:34.424 [711/738] Linking target app/dpdk-test-acl
00:03:34.424 [712/738] Linking target app/dpdk-test-cmdline
00:03:34.424 [713/738] Linking target app/dpdk-test-bbdev
00:03:34.424 [714/738] Linking target app/dpdk-test-compress-perf
00:03:34.424 [715/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:34.684 [716/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:34.684 [717/738] Linking target app/dpdk-test-crypto-perf
00:03:34.684 [718/738] Linking target app/dpdk-test-fib
00:03:34.684 [719/738] Linking target app/dpdk-test-eventdev
00:03:34.684 [720/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:34.684 [721/738] Linking target app/dpdk-test-flow-perf
00:03:34.684 [722/738] Linking target app/dpdk-test-gpudev
00:03:34.945 [723/738] Linking target app/dpdk-test-pipeline
00:03:34.945 [724/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:34.945 [725/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:35.205 [726/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:35.205 [727/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:35.205 [728/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:35.205 [729/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:35.205 [730/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:35.466 [731/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:35.466 [732/738] Linking target app/dpdk-test-sad
00:03:35.466 [733/738] Linking target app/dpdk-test-regex
00:03:35.725 [734/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:35.725 [735/738] Linking target app/dpdk-testpmd
00:03:35.984 [736/738] Linking target app/dpdk-test-security-perf
00:03:35.984 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:35.984 [738/738] Linking target lib/librte_pipeline.so.23.0
00:03:35.984 11:34:49 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:03:35.984 11:34:49 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:35.984 11:34:49 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:35.984 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:35.984 [0/1] Installing files.
00:03:36.245 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:36.245 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:36.246 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:36.247 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.248 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.249 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.249 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:36.250 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:36.250 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.250 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing 
lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.512 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
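Each library in this run lands twice: as a static archive (.a) and as a versioned shared object (.so.23.0). The 23.0 suffix is the DPDK ABI version that goes with the 22.11 release, which is also why the driver directory further down is named pmds-23.0. As a minimal sketch of poking at such an installed tree — not part of this job, and assuming the usual libdpdk.pc that a DPDK meson install drops under the prefix — a one-liner against rte_version.h (installed later in this log) would be:

    #include <stdio.h>
    #include <rte_version.h>  /* from /home/vagrant/spdk_repo/dpdk/build/include */

    int main(void)
    {
        /* rte_version() needs no EAL init; it just returns the
         * human-readable version string of the linked DPDK. */
        printf("%s\n", rte_version());
        return 0;
    }

Compiled with something like cc ver.c $(pkg-config --cflags --libs libdpdk), with PKG_CONFIG_PATH pointed at this install prefix — the exact pkg-config location is not shown in this excerpt of the log.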
00:03:36.513 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing drivers/librte_bus_pci.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:36.513 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:36.513 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:36.513 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.513 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:36.513 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:36.513 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.513 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.514 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.515 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:36.516 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:36.516 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:36.516 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:36.516 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:36.516 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:36.516 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:36.516 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:36.516 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:36.516 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:36.516 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:36.516 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:36.516 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:36.516 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:36.516 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:36.516 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:36.516 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:36.516 Installing 
symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:36.516 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:36.516 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:36.516 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:36.516 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:36.516 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:36.516 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:36.516 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:36.516 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:36.516 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:36.516 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:36.516 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:36.516 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:36.516 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:36.516 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:36.516 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:36.516 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:36.516 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:36.516 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:36.516 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:36.516 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:36.516 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:36.516 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:36.516 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:36.516 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:36.516 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:36.516 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:36.516 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:36.517 Installing symlink pointing to 
librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:36.517 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:36.517 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:36.517 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:36.517 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:36.517 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:36.517 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:36.517 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:36.517 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:36.517 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:36.517 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:36.517 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:36.517 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:36.517 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:36.517 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:36.517 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:36.517 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:36.517 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:36.517 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:36.517 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:36.517 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:36.517 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:36.517 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:36.517 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:36.517 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:36.517 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:36.517 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:36.517 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:36.517 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:36.517 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:36.517 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:36.517 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:36.517 Installing symlink pointing to librte_lpm.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:36.517 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:36.517 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:36.517 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:36.517 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:36.517 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:36.517 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:36.517 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:36.517 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:36.517 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:36.517 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:36.517 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:36.517 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:36.517 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:36.517 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:36.517 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:36.517 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:36.517 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:36.517 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:36.517 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:36.517 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:36.517 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:36.517 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:36.517 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:36.517 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:36.517 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:36.517 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:36.517 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:36.517 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 
00:03:36.517 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23
00:03:36.517 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so
00:03:36.517 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23
00:03:36.517 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so
00:03:36.517 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23
00:03:36.517 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so
00:03:36.517 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23
00:03:36.517 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so
00:03:36.517 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23
00:03:36.517 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so
00:03:36.517 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23
00:03:36.517 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so
00:03:36.517 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23
00:03:36.517 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so
00:03:36.517 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23
00:03:36.517 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so
00:03:36.517 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23
00:03:36.517 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:03:36.517 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:03:36.517 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:03:36.517 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:03:36.517 11:34:49 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:03:36.517 11:34:49 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:36.517
00:03:36.517 real 0m31.718s
00:03:36.517 user 3m38.863s
00:03:36.517 sys 0m31.520s
00:03:36.517 11:34:49 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:36.517 ************************************
00:03:36.517 END TEST build_native_dpdk
00:03:36.517 ************************************
00:03:36.517 11:34:49 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:03:36.777 11:34:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:36.777 11:34:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:36.777 11:34:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:36.777 11:34:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:36.777 11:34:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:36.777 11:34:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:36.777 11:34:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:36.777 11:34:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared
00:03:36.777 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:03:36.777 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:03:36.777 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:03:36.777 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:37.037 Using 'verbs' RDMA provider
00:03:48.406 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:58.399 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:58.399 Creating mk/config.mk...done.
00:03:58.399 Creating mk/cc.flags.mk...done.
00:03:58.399 Type 'make' to build.
00:03:58.399 11:35:11 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:58.399 11:35:11 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:58.399 11:35:11 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:58.399 11:35:11 -- common/autotest_common.sh@10 -- $ set +x
00:03:58.399 ************************************
00:03:58.399 START TEST make
00:03:58.399 ************************************
00:03:58.399 11:35:11 make -- common/autotest_common.sh@1125 -- $ make -j10
00:03:58.399 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:58.399 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:58.399 meson setup builddir \
00:03:58.399 -Dwith-libaio=enabled \
00:03:58.399 -Dwith-liburing=enabled \
00:03:58.399 -Dwith-libvfn=disabled \
00:03:58.399 -Dwith-spdk=false && \
00:03:58.399 meson compile -C builddir && \
00:03:58.399 cd -)
00:03:58.399 make[1]: Nothing to be done for 'all'.
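The make invocation above first drives the bundled xnvme sub-build through Meson (its configuration output follows below), while the SPDK build itself links against the private DPDK prefix that the preceding install step populated: headers under /home/vagrant/spdk_repo/dpdk/build/include, shared libraries plus the libdpdk pkg-config files under /home/vagrant/spdk_repo/dpdk/build/lib, and for each library a symlink chain (librte_X.so -> librte_X.so.23 -> librte_X.so.23.0) so that link-time -lrte_X resolution and the runtime loader's soname lookup land on the same versioned object. As a minimal sketch, a prefix like this can be spot-checked by hand along the following lines; the paths assume this job's /home/vagrant/spdk_repo layout, and the exact version string depends on the checked-out DPDK (a v22.11.x stable release in this run):

  # Point pkg-config at the private prefix the job installed DPDK into.
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk   # expect a 22.11.x version string here
  pkg-config --libs libdpdk         # the -L/-l flags configure picked up
  # Follow the symlink chain down to the real versioned shared object.
  readlink -f /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so
  # -> /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23.0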
00:04:00.300 The Meson build system
00:04:00.300 Version: 1.5.0
00:04:00.300 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:04:00.300 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:04:00.300 Build type: native build
00:04:00.300 Project name: xnvme
00:04:00.300 Project version: 0.7.3
00:04:00.300 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:00.300 C linker for the host machine: gcc ld.bfd 2.40-14
00:04:00.300 Host machine cpu family: x86_64
00:04:00.300 Host machine cpu: x86_64
00:04:00.300 Message: host_machine.system: linux
00:04:00.300 Compiler for C supports arguments -Wno-missing-braces: YES
00:04:00.300 Compiler for C supports arguments -Wno-cast-function-type: YES
00:04:00.300 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:04:00.300 Run-time dependency threads found: YES
00:04:00.300 Has header "setupapi.h" : NO
00:04:00.300 Has header "linux/blkzoned.h" : YES
00:04:00.300 Has header "linux/blkzoned.h" : YES (cached)
00:04:00.300 Has header "libaio.h" : YES
00:04:00.300 Library aio found: YES
00:04:00.300 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:00.300 Run-time dependency liburing found: YES 2.2
00:04:00.300 Dependency libvfn skipped: feature with-libvfn disabled
00:04:00.300 Run-time dependency appleframeworks found: NO (tried framework)
00:04:00.300 Run-time dependency appleframeworks found: NO (tried framework)
00:04:00.300 Configuring xnvme_config.h using configuration
00:04:00.300 Configuring xnvme.spec using configuration
00:04:00.300 Run-time dependency bash-completion found: YES 2.11
00:04:00.300 Message: Bash-completions: /usr/share/bash-completion/completions
00:04:00.300 Program cp found: YES (/usr/bin/cp)
00:04:00.300 Has header "winsock2.h" : NO
00:04:00.300 Has header "dbghelp.h" : NO
00:04:00.300 Library rpcrt4 found: NO
00:04:00.300 Library rt found: YES
00:04:00.300 Checking for function "clock_gettime" with dependency -lrt: YES
00:04:00.300 Found CMake: /usr/bin/cmake (3.27.7)
00:04:00.300 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:04:00.300 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:04:00.300 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:04:00.300 Build targets in project: 32
00:04:00.300
00:04:00.300 xnvme 0.7.3
00:04:00.300
00:04:00.300 User defined options
00:04:00.300 with-libaio : enabled
00:04:00.300 with-liburing: enabled
00:04:00.300 with-libvfn : disabled
00:04:00.300 with-spdk : false
00:04:00.300
00:04:00.300 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:00.559 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:04:00.817 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:04:00.817 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:04:00.817 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:04:00.817 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:04:00.817 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:04:00.817 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:04:00.817 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:04:00.817 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:04:00.817 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:04:00.817 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:04:00.817 [11/203]
Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:04:00.817 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:04:00.817 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:04:00.817 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:04:00.817 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:04:00.817 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:04:00.817 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:04:00.817 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:04:00.817 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:04:01.075 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:04:01.075 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:04:01.075 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:04:01.075 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:04:01.075 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:04:01.075 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:04:01.075 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:04:01.075 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:04:01.075 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:04:01.075 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:04:01.075 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:04:01.075 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:04:01.075 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:04:01.075 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:04:01.075 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:04:01.075 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:04:01.075 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:04:01.075 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:04:01.075 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:04:01.075 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:04:01.075 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:04:01.075 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:04:01.075 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:04:01.075 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:04:01.075 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:04:01.075 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:04:01.075 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:04:01.075 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:04:01.075 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:04:01.075 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:04:01.075 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:04:01.075 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:04:01.075 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:04:01.075 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:04:01.333 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:04:01.333 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:04:01.333 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:04:01.333 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:04:01.333 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:04:01.333 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:04:01.333 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:04:01.333 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:04:01.333 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:04:01.333 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:04:01.333 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:04:01.333 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:04:01.333 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:04:01.333 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:04:01.333 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:04:01.333 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:04:01.333 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:04:01.333 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:04:01.333 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:04:01.333 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:04:01.333 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:04:01.333 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:04:01.333 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:04:01.591 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:04:01.591 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:04:01.591 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:04:01.591 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:04:01.591 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:04:01.591 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:01.591 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:01.591 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:01.591 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:01.591 [86/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:04:01.591 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:01.591 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:01.591 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:01.591 [90/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:01.591 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:01.591 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:01.591 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:01.591 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:01.591 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:01.849 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:01.849 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:01.849 [98/203] Compiling C 
object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:01.849 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:01.849 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:01.849 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:01.849 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:01.849 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:01.849 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:01.849 [105/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:04:01.849 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:01.849 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:01.849 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:01.849 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:01.849 [110/203] Linking target lib/libxnvme.so 00:04:01.849 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:01.849 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:01.849 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:01.849 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:01.849 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:01.849 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:01.849 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:01.849 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:01.849 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:01.849 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:01.849 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:01.849 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:01.849 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:01.849 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:01.849 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:01.849 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:04:01.849 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:01.849 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:04:01.849 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:04:01.849 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:04:01.849 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:04:01.849 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:04:02.108 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:04:02.108 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:04:02.108 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:04:02.108 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:04:02.108 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:04:02.108 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:04:02.108 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:04:02.108 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:04:02.108 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:04:02.108 [142/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 
00:04:02.108 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:04:02.108 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:04:02.108 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:04:02.108 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:04:02.108 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:04:02.108 [148/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:04:02.108 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:04:02.108 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:04:02.108 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:04:02.366 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:04:02.366 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:04:02.366 [154/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:04:02.366 [155/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:04:02.366 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:04:02.366 [157/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:04:02.366 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:04:02.366 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:04:02.366 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:04:02.366 [161/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:04:02.366 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:04:02.366 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:04:02.366 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:04:02.366 [165/203] Compiling C object tools/zoned.p/zoned.c.o 00:04:02.366 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:04:02.366 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:04:02.624 [168/203] Compiling C object tools/kvs.p/kvs.c.o 00:04:02.624 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:04:02.624 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:04:02.624 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:04:02.624 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:04:02.624 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:04:02.624 [174/203] Linking static target lib/libxnvme.a 00:04:02.624 [175/203] Linking target tests/xnvme_tests_buf 00:04:02.624 [176/203] Linking target tests/xnvme_tests_cli 00:04:02.624 [177/203] Linking target tests/xnvme_tests_enum 00:04:02.624 [178/203] Linking target tests/xnvme_tests_xnvme_file 00:04:02.624 [179/203] Linking target tests/xnvme_tests_async_intf 00:04:02.624 [180/203] Linking target tests/xnvme_tests_lblk 00:04:02.624 [181/203] Linking target tests/xnvme_tests_scc 00:04:02.624 [182/203] Linking target tests/xnvme_tests_ioworker 00:04:02.624 [183/203] Linking target tests/xnvme_tests_map 00:04:02.624 [184/203] Linking target tools/lblk 00:04:02.624 [185/203] Linking target tests/xnvme_tests_xnvme_cli 00:04:02.624 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:04:02.624 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:04:02.624 [188/203] Linking target tests/xnvme_tests_znd_append 00:04:02.624 [189/203] Linking target tests/xnvme_tests_kvs 00:04:02.881 [190/203] Linking target tools/xnvme_file 00:04:02.881 [191/203] Linking target tools/xdd 
00:04:02.881 [192/203] Linking target tests/xnvme_tests_znd_state
00:04:02.881 [193/203] Linking target tools/xnvme
00:04:02.881 [194/203] Linking target tools/kvs
00:04:02.881 [195/203] Linking target examples/xnvme_enum
00:04:02.881 [196/203] Linking target examples/xnvme_dev
00:04:02.881 [197/203] Linking target examples/zoned_io_async
00:04:02.881 [198/203] Linking target examples/xnvme_io_async
00:04:02.881 [199/203] Linking target examples/xnvme_hello
00:04:02.881 [200/203] Linking target tools/zoned
00:04:02.881 [201/203] Linking target examples/xnvme_single_async
00:04:02.881 [202/203] Linking target examples/zoned_io_sync
00:04:02.881 [203/203] Linking target examples/xnvme_single_sync
00:04:02.881 INFO: autodetecting backend as ninja
00:04:02.881 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:04:02.881 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:04:34.973 CC lib/log/log_deprecated.o
00:04:34.973 CC lib/log/log.o
00:04:34.973 CC lib/ut_mock/mock.o
00:04:34.973 CC lib/log/log_flags.o
00:04:34.973 CC lib/ut/ut.o
00:04:34.973 LIB libspdk_log.a
00:04:34.973 LIB libspdk_ut.a
00:04:34.973 LIB libspdk_ut_mock.a
00:04:34.973 SO libspdk_ut.so.2.0
00:04:34.973 SO libspdk_log.so.7.0
00:04:34.973 SO libspdk_ut_mock.so.6.0
00:04:34.973 SYMLINK libspdk_ut.so
00:04:34.973 SYMLINK libspdk_log.so
00:04:34.973 SYMLINK libspdk_ut_mock.so
00:04:34.973 CC lib/util/bit_array.o
00:04:34.973 CC lib/util/cpuset.o
00:04:34.973 CC lib/util/base64.o
00:04:34.973 CC lib/util/crc16.o
00:04:34.973 CC lib/util/crc32c.o
00:04:34.973 CC lib/util/crc32.o
00:04:34.973 CC lib/dma/dma.o
00:04:34.973 CXX lib/trace_parser/trace.o
00:04:34.973 CC lib/ioat/ioat.o
00:04:34.973 CC lib/vfio_user/host/vfio_user_pci.o
00:04:34.973 CC lib/util/crc32_ieee.o
00:04:34.973 CC lib/vfio_user/host/vfio_user.o
00:04:34.973 CC lib/util/crc64.o
00:04:34.973 LIB libspdk_dma.a
00:04:34.973 CC lib/util/dif.o
00:04:34.973 CC lib/util/fd.o
00:04:34.973 SO libspdk_dma.so.5.0
00:04:34.973 CC lib/util/fd_group.o
00:04:34.973 CC lib/util/file.o
00:04:34.973 SYMLINK libspdk_dma.so
00:04:34.973 CC lib/util/hexlify.o
00:04:34.973 CC lib/util/iov.o
00:04:34.973 CC lib/util/math.o
00:04:34.973 LIB libspdk_ioat.a
00:04:34.973 SO libspdk_ioat.so.7.0
00:04:34.973 CC lib/util/net.o
00:04:34.973 CC lib/util/pipe.o
00:04:34.973 LIB libspdk_vfio_user.a
00:04:34.973 SYMLINK libspdk_ioat.so
00:04:34.973 CC lib/util/strerror_tls.o
00:04:34.973 CC lib/util/string.o
00:04:34.973 CC lib/util/uuid.o
00:04:34.973 SO libspdk_vfio_user.so.5.0
00:04:34.973 CC lib/util/xor.o
00:04:34.973 CC lib/util/zipf.o
00:04:34.973 SYMLINK libspdk_vfio_user.so
00:04:34.973 CC lib/util/md5.o
00:04:34.973 LIB libspdk_util.a
00:04:34.973 SO libspdk_util.so.10.0
00:04:34.973 LIB libspdk_trace_parser.a
00:04:34.973 SO libspdk_trace_parser.so.6.0
00:04:34.973 SYMLINK libspdk_util.so
00:04:34.973 SYMLINK libspdk_trace_parser.so
00:04:34.973 CC lib/json/json_parse.o
00:04:34.973 CC lib/idxd/idxd.o
00:04:34.973 CC lib/json/json_util.o
00:04:34.973 CC lib/idxd/idxd_user.o
00:04:34.973 CC lib/json/json_write.o
00:04:34.973 CC lib/conf/conf.o
00:04:34.973 CC lib/rdma_provider/common.o
00:04:34.973 CC lib/rdma_utils/rdma_utils.o
00:04:34.973 CC lib/env_dpdk/env.o
00:04:34.973 CC lib/vmd/vmd.o
00:04:34.973 CC lib/rdma_provider/rdma_provider_verbs.o
00:04:34.973 CC lib/idxd/idxd_kernel.o
00:04:34.973 LIB libspdk_conf.a
00:04:34.973 CC lib/env_dpdk/memory.o
00:04:34.973 CC lib/env_dpdk/pci.o
00:04:34.973 SO libspdk_conf.so.6.0
00:04:34.974 LIB libspdk_rdma_utils.a 00:04:34.974 SO libspdk_rdma_utils.so.1.0 00:04:34.974 LIB libspdk_json.a 00:04:34.974 SYMLINK libspdk_conf.so 00:04:34.974 CC lib/env_dpdk/init.o 00:04:34.974 CC lib/env_dpdk/threads.o 00:04:34.974 SO libspdk_json.so.6.0 00:04:34.974 SYMLINK libspdk_rdma_utils.so 00:04:34.974 CC lib/vmd/led.o 00:04:34.974 LIB libspdk_rdma_provider.a 00:04:34.974 SYMLINK libspdk_json.so 00:04:34.974 SO libspdk_rdma_provider.so.6.0 00:04:34.974 SYMLINK libspdk_rdma_provider.so 00:04:34.974 CC lib/env_dpdk/pci_ioat.o 00:04:34.974 CC lib/env_dpdk/pci_virtio.o 00:04:34.974 CC lib/env_dpdk/pci_vmd.o 00:04:34.974 CC lib/jsonrpc/jsonrpc_server.o 00:04:34.974 CC lib/env_dpdk/pci_idxd.o 00:04:34.974 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:34.974 CC lib/env_dpdk/pci_event.o 00:04:34.974 CC lib/env_dpdk/sigbus_handler.o 00:04:34.974 CC lib/env_dpdk/pci_dpdk.o 00:04:34.974 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:34.974 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:34.974 LIB libspdk_idxd.a 00:04:34.974 LIB libspdk_vmd.a 00:04:34.974 SO libspdk_idxd.so.12.1 00:04:34.974 CC lib/jsonrpc/jsonrpc_client.o 00:04:34.974 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:34.974 SO libspdk_vmd.so.6.0 00:04:34.974 SYMLINK libspdk_idxd.so 00:04:34.974 SYMLINK libspdk_vmd.so 00:04:34.974 LIB libspdk_jsonrpc.a 00:04:34.974 SO libspdk_jsonrpc.so.6.0 00:04:34.974 SYMLINK libspdk_jsonrpc.so 00:04:34.974 CC lib/rpc/rpc.o 00:04:34.974 LIB libspdk_env_dpdk.a 00:04:34.974 LIB libspdk_rpc.a 00:04:34.974 SO libspdk_rpc.so.6.0 00:04:34.974 SO libspdk_env_dpdk.so.15.0 00:04:34.974 SYMLINK libspdk_rpc.so 00:04:34.974 SYMLINK libspdk_env_dpdk.so 00:04:34.974 CC lib/trace/trace.o 00:04:34.974 CC lib/trace/trace_rpc.o 00:04:34.974 CC lib/trace/trace_flags.o 00:04:34.974 CC lib/notify/notify.o 00:04:34.974 CC lib/notify/notify_rpc.o 00:04:34.974 CC lib/keyring/keyring.o 00:04:34.974 CC lib/keyring/keyring_rpc.o 00:04:34.974 LIB libspdk_notify.a 00:04:34.974 SO libspdk_notify.so.6.0 00:04:34.974 LIB libspdk_trace.a 00:04:34.974 SO libspdk_trace.so.11.0 00:04:34.974 LIB libspdk_keyring.a 00:04:34.974 SYMLINK libspdk_notify.so 00:04:34.974 SO libspdk_keyring.so.2.0 00:04:34.974 SYMLINK libspdk_trace.so 00:04:34.974 SYMLINK libspdk_keyring.so 00:04:34.974 CC lib/sock/sock.o 00:04:34.974 CC lib/sock/sock_rpc.o 00:04:34.974 CC lib/thread/thread.o 00:04:34.974 CC lib/thread/iobuf.o 00:04:34.974 LIB libspdk_sock.a 00:04:34.974 SO libspdk_sock.so.10.0 00:04:35.232 SYMLINK libspdk_sock.so 00:04:35.232 CC lib/nvme/nvme_ctrlr.o 00:04:35.232 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:35.232 CC lib/nvme/nvme_ns_cmd.o 00:04:35.232 CC lib/nvme/nvme_fabric.o 00:04:35.232 CC lib/nvme/nvme_ns.o 00:04:35.232 CC lib/nvme/nvme_qpair.o 00:04:35.232 CC lib/nvme/nvme_pcie.o 00:04:35.232 CC lib/nvme/nvme_pcie_common.o 00:04:35.491 CC lib/nvme/nvme.o 00:04:35.749 CC lib/nvme/nvme_quirks.o 00:04:36.007 CC lib/nvme/nvme_transport.o 00:04:36.007 CC lib/nvme/nvme_discovery.o 00:04:36.007 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:36.007 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:36.286 CC lib/nvme/nvme_tcp.o 00:04:36.286 LIB libspdk_thread.a 00:04:36.286 CC lib/nvme/nvme_opal.o 00:04:36.286 SO libspdk_thread.so.10.1 00:04:36.286 CC lib/nvme/nvme_io_msg.o 00:04:36.286 SYMLINK libspdk_thread.so 00:04:36.286 CC lib/nvme/nvme_poll_group.o 00:04:36.286 CC lib/nvme/nvme_zns.o 00:04:36.605 CC lib/nvme/nvme_stubs.o 00:04:36.605 CC lib/accel/accel.o 00:04:36.605 CC lib/nvme/nvme_auth.o 00:04:36.605 CC lib/nvme/nvme_cuse.o 00:04:36.865 CC lib/blob/blobstore.o 00:04:36.865 CC 
lib/blob/request.o 00:04:36.865 CC lib/blob/zeroes.o 00:04:36.865 CC lib/init/json_config.o 00:04:36.865 CC lib/init/subsystem.o 00:04:36.865 CC lib/init/subsystem_rpc.o 00:04:37.124 CC lib/blob/blob_bs_dev.o 00:04:37.124 CC lib/init/rpc.o 00:04:37.124 CC lib/accel/accel_rpc.o 00:04:37.124 CC lib/nvme/nvme_rdma.o 00:04:37.124 LIB libspdk_init.a 00:04:37.124 SO libspdk_init.so.6.0 00:04:37.382 CC lib/virtio/virtio.o 00:04:37.382 SYMLINK libspdk_init.so 00:04:37.382 CC lib/fsdev/fsdev.o 00:04:37.382 CC lib/event/app.o 00:04:37.382 CC lib/accel/accel_sw.o 00:04:37.382 CC lib/fsdev/fsdev_io.o 00:04:37.382 CC lib/virtio/virtio_vhost_user.o 00:04:37.641 CC lib/fsdev/fsdev_rpc.o 00:04:37.641 CC lib/virtio/virtio_vfio_user.o 00:04:37.641 CC lib/event/reactor.o 00:04:37.641 CC lib/virtio/virtio_pci.o 00:04:37.641 LIB libspdk_accel.a 00:04:37.901 SO libspdk_accel.so.16.0 00:04:37.901 CC lib/event/log_rpc.o 00:04:37.901 LIB libspdk_fsdev.a 00:04:37.901 SYMLINK libspdk_accel.so 00:04:37.901 CC lib/event/app_rpc.o 00:04:37.901 CC lib/event/scheduler_static.o 00:04:37.901 SO libspdk_fsdev.so.1.0 00:04:37.901 SYMLINK libspdk_fsdev.so 00:04:37.901 LIB libspdk_virtio.a 00:04:37.901 CC lib/bdev/bdev.o 00:04:37.901 CC lib/bdev/bdev_zone.o 00:04:37.901 CC lib/bdev/bdev_rpc.o 00:04:37.901 CC lib/bdev/part.o 00:04:38.159 SO libspdk_virtio.so.7.0 00:04:38.159 SYMLINK libspdk_virtio.so 00:04:38.159 CC lib/bdev/scsi_nvme.o 00:04:38.159 LIB libspdk_event.a 00:04:38.159 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:38.159 SO libspdk_event.so.14.0 00:04:38.159 SYMLINK libspdk_event.so 00:04:38.417 LIB libspdk_nvme.a 00:04:38.675 SO libspdk_nvme.so.14.0 00:04:38.675 LIB libspdk_fuse_dispatcher.a 00:04:38.675 SO libspdk_fuse_dispatcher.so.1.0 00:04:38.675 SYMLINK libspdk_fuse_dispatcher.so 00:04:38.675 SYMLINK libspdk_nvme.so 00:04:40.051 LIB libspdk_blob.a 00:04:40.051 SO libspdk_blob.so.11.0 00:04:40.051 SYMLINK libspdk_blob.so 00:04:40.051 CC lib/blobfs/tree.o 00:04:40.051 CC lib/blobfs/blobfs.o 00:04:40.051 CC lib/lvol/lvol.o 00:04:40.624 LIB libspdk_bdev.a 00:04:40.624 SO libspdk_bdev.so.16.0 00:04:40.624 LIB libspdk_blobfs.a 00:04:40.882 SYMLINK libspdk_bdev.so 00:04:40.882 SO libspdk_blobfs.so.10.0 00:04:40.882 SYMLINK libspdk_blobfs.so 00:04:40.882 CC lib/ublk/ublk.o 00:04:40.882 CC lib/ublk/ublk_rpc.o 00:04:40.882 CC lib/scsi/lun.o 00:04:40.882 CC lib/scsi/port.o 00:04:40.882 CC lib/scsi/dev.o 00:04:40.882 CC lib/scsi/scsi.o 00:04:40.882 CC lib/ftl/ftl_core.o 00:04:40.882 CC lib/nvmf/ctrlr.o 00:04:40.882 CC lib/nbd/nbd.o 00:04:41.140 CC lib/nbd/nbd_rpc.o 00:04:41.140 CC lib/nvmf/ctrlr_discovery.o 00:04:41.140 LIB libspdk_lvol.a 00:04:41.140 CC lib/nvmf/ctrlr_bdev.o 00:04:41.140 SO libspdk_lvol.so.10.0 00:04:41.140 CC lib/scsi/scsi_bdev.o 00:04:41.140 SYMLINK libspdk_lvol.so 00:04:41.140 CC lib/ftl/ftl_init.o 00:04:41.140 CC lib/scsi/scsi_pr.o 00:04:41.140 CC lib/scsi/scsi_rpc.o 00:04:41.401 CC lib/ftl/ftl_layout.o 00:04:41.401 CC lib/nvmf/subsystem.o 00:04:41.401 LIB libspdk_nbd.a 00:04:41.401 CC lib/scsi/task.o 00:04:41.401 SO libspdk_nbd.so.7.0 00:04:41.401 CC lib/nvmf/nvmf.o 00:04:41.401 LIB libspdk_ublk.a 00:04:41.401 SYMLINK libspdk_nbd.so 00:04:41.401 CC lib/nvmf/nvmf_rpc.o 00:04:41.401 SO libspdk_ublk.so.3.0 00:04:41.401 CC lib/nvmf/transport.o 00:04:41.401 SYMLINK libspdk_ublk.so 00:04:41.401 CC lib/nvmf/tcp.o 00:04:41.401 CC lib/nvmf/stubs.o 00:04:41.663 CC lib/ftl/ftl_debug.o 00:04:41.663 LIB libspdk_scsi.a 00:04:41.663 SO libspdk_scsi.so.9.0 00:04:41.663 CC lib/ftl/ftl_io.o 00:04:41.663 CC 
lib/nvmf/mdns_server.o 00:04:41.663 SYMLINK libspdk_scsi.so 00:04:41.663 CC lib/nvmf/rdma.o 00:04:41.922 CC lib/nvmf/auth.o 00:04:41.922 CC lib/ftl/ftl_sb.o 00:04:41.922 CC lib/ftl/ftl_l2p.o 00:04:42.181 CC lib/ftl/ftl_l2p_flat.o 00:04:42.181 CC lib/ftl/ftl_nv_cache.o 00:04:42.181 CC lib/ftl/ftl_band.o 00:04:42.181 CC lib/ftl/ftl_band_ops.o 00:04:42.181 CC lib/ftl/ftl_writer.o 00:04:42.181 CC lib/ftl/ftl_rq.o 00:04:42.439 CC lib/ftl/ftl_reloc.o 00:04:42.439 CC lib/ftl/ftl_l2p_cache.o 00:04:42.439 CC lib/ftl/ftl_p2l.o 00:04:42.439 CC lib/ftl/ftl_p2l_log.o 00:04:42.439 CC lib/ftl/mngt/ftl_mngt.o 00:04:42.698 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:42.698 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:42.698 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:42.698 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:42.698 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:42.698 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:42.698 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:42.956 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:42.956 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:42.956 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:42.956 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:42.956 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:42.956 CC lib/ftl/utils/ftl_conf.o 00:04:42.956 CC lib/ftl/utils/ftl_md.o 00:04:42.956 CC lib/ftl/utils/ftl_mempool.o 00:04:42.956 CC lib/ftl/utils/ftl_bitmap.o 00:04:43.215 CC lib/ftl/utils/ftl_property.o 00:04:43.215 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:43.215 CC lib/iscsi/conn.o 00:04:43.215 CC lib/vhost/vhost.o 00:04:43.215 CC lib/vhost/vhost_rpc.o 00:04:43.215 CC lib/vhost/vhost_scsi.o 00:04:43.215 CC lib/vhost/vhost_blk.o 00:04:43.215 CC lib/vhost/rte_vhost_user.o 00:04:43.215 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:43.472 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:43.472 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:43.472 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:43.473 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:43.473 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:43.731 CC lib/iscsi/init_grp.o 00:04:43.731 CC lib/iscsi/iscsi.o 00:04:43.731 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:43.731 CC lib/iscsi/param.o 00:04:43.731 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:43.731 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:43.731 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:43.989 LIB libspdk_nvmf.a 00:04:43.989 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:43.989 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:43.989 CC lib/iscsi/portal_grp.o 00:04:43.989 CC lib/ftl/base/ftl_base_dev.o 00:04:43.989 SO libspdk_nvmf.so.19.0 00:04:43.989 CC lib/iscsi/tgt_node.o 00:04:43.989 CC lib/ftl/base/ftl_base_bdev.o 00:04:43.989 CC lib/iscsi/iscsi_subsystem.o 00:04:43.989 CC lib/ftl/ftl_trace.o 00:04:43.989 CC lib/iscsi/iscsi_rpc.o 00:04:43.989 LIB libspdk_vhost.a 00:04:44.246 CC lib/iscsi/task.o 00:04:44.246 SO libspdk_vhost.so.8.0 00:04:44.246 SYMLINK libspdk_vhost.so 00:04:44.246 LIB libspdk_ftl.a 00:04:44.246 SYMLINK libspdk_nvmf.so 00:04:44.504 SO libspdk_ftl.so.9.0 00:04:44.504 SYMLINK libspdk_ftl.so 00:04:44.761 LIB libspdk_iscsi.a 00:04:44.761 SO libspdk_iscsi.so.8.0 00:04:45.020 SYMLINK libspdk_iscsi.so 00:04:45.278 CC module/env_dpdk/env_dpdk_rpc.o 00:04:45.278 CC module/blob/bdev/blob_bdev.o 00:04:45.278 CC module/sock/posix/posix.o 00:04:45.278 CC module/accel/error/accel_error.o 00:04:45.278 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:45.278 CC module/fsdev/aio/fsdev_aio.o 00:04:45.278 CC module/accel/ioat/accel_ioat.o 00:04:45.278 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:45.278 CC module/keyring/file/keyring.o 00:04:45.278 CC 
module/scheduler/gscheduler/gscheduler.o 00:04:45.278 LIB libspdk_env_dpdk_rpc.a 00:04:45.278 SO libspdk_env_dpdk_rpc.so.6.0 00:04:45.278 LIB libspdk_scheduler_dpdk_governor.a 00:04:45.278 CC module/keyring/file/keyring_rpc.o 00:04:45.278 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:45.537 SYMLINK libspdk_env_dpdk_rpc.so 00:04:45.537 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:45.537 LIB libspdk_scheduler_gscheduler.a 00:04:45.537 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:45.537 SO libspdk_scheduler_gscheduler.so.4.0 00:04:45.537 CC module/accel/error/accel_error_rpc.o 00:04:45.537 CC module/accel/ioat/accel_ioat_rpc.o 00:04:45.537 LIB libspdk_scheduler_dynamic.a 00:04:45.537 LIB libspdk_keyring_file.a 00:04:45.537 SYMLINK libspdk_scheduler_gscheduler.so 00:04:45.537 SO libspdk_scheduler_dynamic.so.4.0 00:04:45.537 SO libspdk_keyring_file.so.2.0 00:04:45.537 LIB libspdk_blob_bdev.a 00:04:45.537 SO libspdk_blob_bdev.so.11.0 00:04:45.537 SYMLINK libspdk_keyring_file.so 00:04:45.537 CC module/fsdev/aio/linux_aio_mgr.o 00:04:45.537 SYMLINK libspdk_scheduler_dynamic.so 00:04:45.537 LIB libspdk_accel_error.a 00:04:45.537 LIB libspdk_accel_ioat.a 00:04:45.537 SYMLINK libspdk_blob_bdev.so 00:04:45.537 CC module/accel/dsa/accel_dsa.o 00:04:45.537 CC module/accel/dsa/accel_dsa_rpc.o 00:04:45.537 SO libspdk_accel_ioat.so.6.0 00:04:45.537 SO libspdk_accel_error.so.2.0 00:04:45.537 CC module/keyring/linux/keyring.o 00:04:45.796 SYMLINK libspdk_accel_error.so 00:04:45.796 SYMLINK libspdk_accel_ioat.so 00:04:45.796 CC module/accel/iaa/accel_iaa.o 00:04:45.796 CC module/accel/iaa/accel_iaa_rpc.o 00:04:45.796 CC module/keyring/linux/keyring_rpc.o 00:04:45.796 CC module/bdev/delay/vbdev_delay.o 00:04:45.796 CC module/bdev/gpt/gpt.o 00:04:45.796 CC module/bdev/error/vbdev_error.o 00:04:45.796 LIB libspdk_sock_posix.a 00:04:45.796 CC module/bdev/error/vbdev_error_rpc.o 00:04:45.796 SO libspdk_sock_posix.so.6.0 00:04:45.796 CC module/blobfs/bdev/blobfs_bdev.o 00:04:45.796 LIB libspdk_accel_dsa.a 00:04:45.796 LIB libspdk_accel_iaa.a 00:04:45.796 LIB libspdk_keyring_linux.a 00:04:45.796 SO libspdk_accel_dsa.so.5.0 00:04:45.796 SO libspdk_accel_iaa.so.3.0 00:04:46.054 SO libspdk_keyring_linux.so.1.0 00:04:46.054 LIB libspdk_fsdev_aio.a 00:04:46.054 SYMLINK libspdk_sock_posix.so 00:04:46.054 SYMLINK libspdk_accel_dsa.so 00:04:46.054 SYMLINK libspdk_keyring_linux.so 00:04:46.054 SYMLINK libspdk_accel_iaa.so 00:04:46.054 CC module/bdev/gpt/vbdev_gpt.o 00:04:46.054 SO libspdk_fsdev_aio.so.1.0 00:04:46.054 SYMLINK libspdk_fsdev_aio.so 00:04:46.054 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:46.054 LIB libspdk_bdev_error.a 00:04:46.054 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:46.054 CC module/bdev/lvol/vbdev_lvol.o 00:04:46.054 SO libspdk_bdev_error.so.6.0 00:04:46.054 CC module/bdev/null/bdev_null.o 00:04:46.054 CC module/bdev/malloc/bdev_malloc.o 00:04:46.054 SYMLINK libspdk_bdev_error.so 00:04:46.054 CC module/bdev/nvme/bdev_nvme.o 00:04:46.054 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:46.054 CC module/bdev/passthru/vbdev_passthru.o 00:04:46.054 CC module/bdev/raid/bdev_raid.o 00:04:46.311 LIB libspdk_bdev_gpt.a 00:04:46.311 LIB libspdk_blobfs_bdev.a 00:04:46.311 LIB libspdk_bdev_delay.a 00:04:46.311 SO libspdk_bdev_gpt.so.6.0 00:04:46.311 SO libspdk_blobfs_bdev.so.6.0 00:04:46.311 SO libspdk_bdev_delay.so.6.0 00:04:46.311 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:46.311 SYMLINK libspdk_blobfs_bdev.so 00:04:46.311 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:46.311 SYMLINK 
libspdk_bdev_gpt.so 00:04:46.311 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:46.311 SYMLINK libspdk_bdev_delay.so 00:04:46.311 CC module/bdev/nvme/nvme_rpc.o 00:04:46.311 CC module/bdev/null/bdev_null_rpc.o 00:04:46.571 CC module/bdev/nvme/bdev_mdns_client.o 00:04:46.571 LIB libspdk_bdev_malloc.a 00:04:46.571 LIB libspdk_bdev_passthru.a 00:04:46.571 SO libspdk_bdev_malloc.so.6.0 00:04:46.571 SO libspdk_bdev_passthru.so.6.0 00:04:46.571 CC module/bdev/nvme/vbdev_opal.o 00:04:46.571 SYMLINK libspdk_bdev_malloc.so 00:04:46.571 LIB libspdk_bdev_null.a 00:04:46.571 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:46.571 SYMLINK libspdk_bdev_passthru.so 00:04:46.571 SO libspdk_bdev_null.so.6.0 00:04:46.571 SYMLINK libspdk_bdev_null.so 00:04:46.571 CC module/bdev/split/vbdev_split.o 00:04:46.571 LIB libspdk_bdev_lvol.a 00:04:46.571 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:46.571 CC module/bdev/split/vbdev_split_rpc.o 00:04:46.829 SO libspdk_bdev_lvol.so.6.0 00:04:46.829 CC module/bdev/xnvme/bdev_xnvme.o 00:04:46.829 CC module/bdev/aio/bdev_aio.o 00:04:46.829 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:46.829 SYMLINK libspdk_bdev_lvol.so 00:04:46.829 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:46.829 CC module/bdev/raid/bdev_raid_rpc.o 00:04:46.829 LIB libspdk_bdev_split.a 00:04:46.829 SO libspdk_bdev_split.so.6.0 00:04:46.829 SYMLINK libspdk_bdev_split.so 00:04:46.829 LIB libspdk_bdev_xnvme.a 00:04:46.829 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:46.829 SO libspdk_bdev_xnvme.so.3.0 00:04:47.086 CC module/bdev/raid/bdev_raid_sb.o 00:04:47.086 CC module/bdev/raid/raid0.o 00:04:47.086 SYMLINK libspdk_bdev_xnvme.so 00:04:47.086 CC module/bdev/aio/bdev_aio_rpc.o 00:04:47.086 CC module/bdev/iscsi/bdev_iscsi.o 00:04:47.086 CC module/bdev/ftl/bdev_ftl.o 00:04:47.086 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:47.086 CC module/bdev/raid/raid1.o 00:04:47.086 LIB libspdk_bdev_zone_block.a 00:04:47.086 SO libspdk_bdev_zone_block.so.6.0 00:04:47.086 LIB libspdk_bdev_aio.a 00:04:47.086 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:47.086 SYMLINK libspdk_bdev_zone_block.so 00:04:47.086 SO libspdk_bdev_aio.so.6.0 00:04:47.086 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:47.086 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:47.086 SYMLINK libspdk_bdev_aio.so 00:04:47.086 CC module/bdev/raid/concat.o 00:04:47.086 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:47.344 LIB libspdk_bdev_ftl.a 00:04:47.344 SO libspdk_bdev_ftl.so.6.0 00:04:47.344 LIB libspdk_bdev_raid.a 00:04:47.344 LIB libspdk_bdev_iscsi.a 00:04:47.344 SYMLINK libspdk_bdev_ftl.so 00:04:47.344 SO libspdk_bdev_iscsi.so.6.0 00:04:47.344 SO libspdk_bdev_raid.so.6.0 00:04:47.344 SYMLINK libspdk_bdev_iscsi.so 00:04:47.344 SYMLINK libspdk_bdev_raid.so 00:04:47.602 LIB libspdk_bdev_virtio.a 00:04:47.602 SO libspdk_bdev_virtio.so.6.0 00:04:47.602 SYMLINK libspdk_bdev_virtio.so 00:04:48.539 LIB libspdk_bdev_nvme.a 00:04:48.539 SO libspdk_bdev_nvme.so.7.0 00:04:48.539 SYMLINK libspdk_bdev_nvme.so 00:04:48.797 CC module/event/subsystems/scheduler/scheduler.o 00:04:48.797 CC module/event/subsystems/fsdev/fsdev.o 00:04:48.797 CC module/event/subsystems/vmd/vmd.o 00:04:48.797 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:48.797 CC module/event/subsystems/keyring/keyring.o 00:04:48.797 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:48.797 CC module/event/subsystems/iobuf/iobuf.o 00:04:48.797 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:48.797 CC module/event/subsystems/sock/sock.o 00:04:49.057 LIB libspdk_event_fsdev.a 00:04:49.057 LIB 
libspdk_event_sock.a 00:04:49.057 LIB libspdk_event_vhost_blk.a 00:04:49.057 LIB libspdk_event_scheduler.a 00:04:49.057 SO libspdk_event_fsdev.so.1.0 00:04:49.057 SO libspdk_event_sock.so.5.0 00:04:49.057 SO libspdk_event_vhost_blk.so.3.0 00:04:49.057 LIB libspdk_event_vmd.a 00:04:49.057 SO libspdk_event_scheduler.so.4.0 00:04:49.057 LIB libspdk_event_iobuf.a 00:04:49.057 LIB libspdk_event_keyring.a 00:04:49.057 SO libspdk_event_vmd.so.6.0 00:04:49.057 SYMLINK libspdk_event_fsdev.so 00:04:49.057 SO libspdk_event_iobuf.so.3.0 00:04:49.057 SYMLINK libspdk_event_sock.so 00:04:49.057 SYMLINK libspdk_event_vhost_blk.so 00:04:49.057 SO libspdk_event_keyring.so.1.0 00:04:49.057 SYMLINK libspdk_event_scheduler.so 00:04:49.057 SYMLINK libspdk_event_vmd.so 00:04:49.057 SYMLINK libspdk_event_iobuf.so 00:04:49.057 SYMLINK libspdk_event_keyring.so 00:04:49.317 CC module/event/subsystems/accel/accel.o 00:04:49.317 LIB libspdk_event_accel.a 00:04:49.317 SO libspdk_event_accel.so.6.0 00:04:49.574 SYMLINK libspdk_event_accel.so 00:04:49.574 CC module/event/subsystems/bdev/bdev.o 00:04:49.832 LIB libspdk_event_bdev.a 00:04:49.832 SO libspdk_event_bdev.so.6.0 00:04:49.832 SYMLINK libspdk_event_bdev.so 00:04:50.089 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:50.089 CC module/event/subsystems/ublk/ublk.o 00:04:50.089 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:50.089 CC module/event/subsystems/nbd/nbd.o 00:04:50.089 CC module/event/subsystems/scsi/scsi.o 00:04:50.089 LIB libspdk_event_ublk.a 00:04:50.089 LIB libspdk_event_nbd.a 00:04:50.089 SO libspdk_event_ublk.so.3.0 00:04:50.089 LIB libspdk_event_scsi.a 00:04:50.347 SO libspdk_event_nbd.so.6.0 00:04:50.347 SO libspdk_event_scsi.so.6.0 00:04:50.347 SYMLINK libspdk_event_ublk.so 00:04:50.347 LIB libspdk_event_nvmf.a 00:04:50.347 SYMLINK libspdk_event_nbd.so 00:04:50.347 SYMLINK libspdk_event_scsi.so 00:04:50.347 SO libspdk_event_nvmf.so.6.0 00:04:50.347 SYMLINK libspdk_event_nvmf.so 00:04:50.605 CC module/event/subsystems/iscsi/iscsi.o 00:04:50.605 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:50.605 LIB libspdk_event_vhost_scsi.a 00:04:50.605 LIB libspdk_event_iscsi.a 00:04:50.605 SO libspdk_event_vhost_scsi.so.3.0 00:04:50.605 SO libspdk_event_iscsi.so.6.0 00:04:50.605 SYMLINK libspdk_event_vhost_scsi.so 00:04:50.605 SYMLINK libspdk_event_iscsi.so 00:04:50.863 SO libspdk.so.6.0 00:04:50.863 SYMLINK libspdk.so 00:04:51.122 TEST_HEADER include/spdk/accel.h 00:04:51.122 TEST_HEADER include/spdk/accel_module.h 00:04:51.122 CC test/rpc_client/rpc_client_test.o 00:04:51.122 TEST_HEADER include/spdk/assert.h 00:04:51.122 TEST_HEADER include/spdk/barrier.h 00:04:51.122 TEST_HEADER include/spdk/base64.h 00:04:51.122 CXX app/trace/trace.o 00:04:51.122 TEST_HEADER include/spdk/bdev.h 00:04:51.122 TEST_HEADER include/spdk/bdev_module.h 00:04:51.122 TEST_HEADER include/spdk/bdev_zone.h 00:04:51.122 TEST_HEADER include/spdk/bit_array.h 00:04:51.122 TEST_HEADER include/spdk/bit_pool.h 00:04:51.122 TEST_HEADER include/spdk/blob_bdev.h 00:04:51.122 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:51.122 TEST_HEADER include/spdk/blobfs.h 00:04:51.122 TEST_HEADER include/spdk/blob.h 00:04:51.122 TEST_HEADER include/spdk/conf.h 00:04:51.122 TEST_HEADER include/spdk/config.h 00:04:51.122 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:51.122 TEST_HEADER include/spdk/cpuset.h 00:04:51.122 TEST_HEADER include/spdk/crc16.h 00:04:51.122 TEST_HEADER include/spdk/crc32.h 00:04:51.122 TEST_HEADER include/spdk/crc64.h 00:04:51.122 TEST_HEADER include/spdk/dif.h 
00:04:51.122 TEST_HEADER include/spdk/dma.h 00:04:51.122 TEST_HEADER include/spdk/endian.h 00:04:51.122 TEST_HEADER include/spdk/env_dpdk.h 00:04:51.122 TEST_HEADER include/spdk/env.h 00:04:51.122 TEST_HEADER include/spdk/event.h 00:04:51.122 TEST_HEADER include/spdk/fd_group.h 00:04:51.122 TEST_HEADER include/spdk/fd.h 00:04:51.122 CC examples/ioat/perf/perf.o 00:04:51.122 TEST_HEADER include/spdk/file.h 00:04:51.122 CC examples/util/zipf/zipf.o 00:04:51.122 TEST_HEADER include/spdk/fsdev.h 00:04:51.122 CC test/thread/poller_perf/poller_perf.o 00:04:51.122 TEST_HEADER include/spdk/fsdev_module.h 00:04:51.122 TEST_HEADER include/spdk/ftl.h 00:04:51.122 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:51.122 TEST_HEADER include/spdk/gpt_spec.h 00:04:51.122 TEST_HEADER include/spdk/hexlify.h 00:04:51.122 TEST_HEADER include/spdk/histogram_data.h 00:04:51.122 TEST_HEADER include/spdk/idxd.h 00:04:51.122 CC test/app/bdev_svc/bdev_svc.o 00:04:51.122 TEST_HEADER include/spdk/idxd_spec.h 00:04:51.122 TEST_HEADER include/spdk/init.h 00:04:51.122 CC test/dma/test_dma/test_dma.o 00:04:51.122 TEST_HEADER include/spdk/ioat.h 00:04:51.122 TEST_HEADER include/spdk/ioat_spec.h 00:04:51.122 TEST_HEADER include/spdk/iscsi_spec.h 00:04:51.122 TEST_HEADER include/spdk/json.h 00:04:51.122 TEST_HEADER include/spdk/jsonrpc.h 00:04:51.122 TEST_HEADER include/spdk/keyring.h 00:04:51.122 TEST_HEADER include/spdk/keyring_module.h 00:04:51.122 TEST_HEADER include/spdk/likely.h 00:04:51.122 TEST_HEADER include/spdk/log.h 00:04:51.122 TEST_HEADER include/spdk/lvol.h 00:04:51.122 TEST_HEADER include/spdk/md5.h 00:04:51.122 TEST_HEADER include/spdk/memory.h 00:04:51.122 TEST_HEADER include/spdk/mmio.h 00:04:51.122 TEST_HEADER include/spdk/nbd.h 00:04:51.122 TEST_HEADER include/spdk/net.h 00:04:51.122 TEST_HEADER include/spdk/notify.h 00:04:51.122 TEST_HEADER include/spdk/nvme.h 00:04:51.122 TEST_HEADER include/spdk/nvme_intel.h 00:04:51.122 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:51.122 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:51.122 TEST_HEADER include/spdk/nvme_spec.h 00:04:51.122 TEST_HEADER include/spdk/nvme_zns.h 00:04:51.122 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:51.122 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:51.122 TEST_HEADER include/spdk/nvmf.h 00:04:51.122 TEST_HEADER include/spdk/nvmf_spec.h 00:04:51.122 TEST_HEADER include/spdk/nvmf_transport.h 00:04:51.122 CC test/env/mem_callbacks/mem_callbacks.o 00:04:51.122 TEST_HEADER include/spdk/opal.h 00:04:51.122 TEST_HEADER include/spdk/opal_spec.h 00:04:51.122 TEST_HEADER include/spdk/pci_ids.h 00:04:51.122 TEST_HEADER include/spdk/pipe.h 00:04:51.122 TEST_HEADER include/spdk/queue.h 00:04:51.122 TEST_HEADER include/spdk/reduce.h 00:04:51.122 TEST_HEADER include/spdk/rpc.h 00:04:51.122 TEST_HEADER include/spdk/scheduler.h 00:04:51.122 TEST_HEADER include/spdk/scsi.h 00:04:51.122 TEST_HEADER include/spdk/scsi_spec.h 00:04:51.122 LINK rpc_client_test 00:04:51.122 TEST_HEADER include/spdk/sock.h 00:04:51.122 TEST_HEADER include/spdk/stdinc.h 00:04:51.122 TEST_HEADER include/spdk/string.h 00:04:51.122 TEST_HEADER include/spdk/thread.h 00:04:51.122 TEST_HEADER include/spdk/trace.h 00:04:51.122 TEST_HEADER include/spdk/trace_parser.h 00:04:51.122 TEST_HEADER include/spdk/tree.h 00:04:51.122 TEST_HEADER include/spdk/ublk.h 00:04:51.122 TEST_HEADER include/spdk/util.h 00:04:51.122 TEST_HEADER include/spdk/uuid.h 00:04:51.122 TEST_HEADER include/spdk/version.h 00:04:51.122 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:51.122 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:04:51.122 TEST_HEADER include/spdk/vhost.h 00:04:51.122 LINK poller_perf 00:04:51.122 LINK interrupt_tgt 00:04:51.122 TEST_HEADER include/spdk/vmd.h 00:04:51.122 TEST_HEADER include/spdk/xor.h 00:04:51.122 TEST_HEADER include/spdk/zipf.h 00:04:51.122 LINK zipf 00:04:51.122 CXX test/cpp_headers/accel.o 00:04:51.381 LINK bdev_svc 00:04:51.381 LINK ioat_perf 00:04:51.381 LINK mem_callbacks 00:04:51.381 LINK spdk_trace 00:04:51.381 CC examples/ioat/verify/verify.o 00:04:51.381 CXX test/cpp_headers/accel_module.o 00:04:51.381 CC test/app/histogram_perf/histogram_perf.o 00:04:51.381 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:51.381 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:51.381 CC test/env/vtophys/vtophys.o 00:04:51.381 CC test/event/event_perf/event_perf.o 00:04:51.381 CC test/event/reactor/reactor.o 00:04:51.641 CC app/trace_record/trace_record.o 00:04:51.641 LINK verify 00:04:51.641 CXX test/cpp_headers/assert.o 00:04:51.641 LINK histogram_perf 00:04:51.641 LINK test_dma 00:04:51.641 LINK vtophys 00:04:51.641 LINK event_perf 00:04:51.641 LINK reactor 00:04:51.641 CXX test/cpp_headers/barrier.o 00:04:51.900 CC app/nvmf_tgt/nvmf_main.o 00:04:51.900 LINK spdk_trace_record 00:04:51.900 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:51.900 CC test/event/reactor_perf/reactor_perf.o 00:04:51.900 CXX test/cpp_headers/base64.o 00:04:51.900 CC examples/thread/thread/thread_ex.o 00:04:51.900 CC test/event/app_repeat/app_repeat.o 00:04:51.900 LINK nvme_fuzz 00:04:51.900 CC app/iscsi_tgt/iscsi_tgt.o 00:04:51.900 CXX test/cpp_headers/bdev.o 00:04:51.900 LINK env_dpdk_post_init 00:04:51.900 LINK nvmf_tgt 00:04:51.900 LINK reactor_perf 00:04:51.900 LINK app_repeat 00:04:52.159 LINK thread 00:04:52.159 CXX test/cpp_headers/bdev_module.o 00:04:52.159 LINK iscsi_tgt 00:04:52.159 CC test/env/memory/memory_ut.o 00:04:52.159 CC test/event/scheduler/scheduler.o 00:04:52.159 CC examples/sock/hello_world/hello_sock.o 00:04:52.159 CC test/env/pci/pci_ut.o 00:04:52.159 CC test/app/jsoncat/jsoncat.o 00:04:52.159 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:52.159 CXX test/cpp_headers/bdev_zone.o 00:04:52.159 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:52.417 LINK jsoncat 00:04:52.417 CC app/spdk_tgt/spdk_tgt.o 00:04:52.418 CXX test/cpp_headers/bit_array.o 00:04:52.418 LINK scheduler 00:04:52.418 LINK hello_sock 00:04:52.418 CXX test/cpp_headers/bit_pool.o 00:04:52.418 LINK pci_ut 00:04:52.418 CC examples/vmd/lsvmd/lsvmd.o 00:04:52.418 CC examples/vmd/led/led.o 00:04:52.418 LINK spdk_tgt 00:04:52.418 CC test/app/stub/stub.o 00:04:52.676 CXX test/cpp_headers/blob_bdev.o 00:04:52.676 LINK lsvmd 00:04:52.676 LINK led 00:04:52.676 LINK vhost_fuzz 00:04:52.676 CC examples/idxd/perf/perf.o 00:04:52.676 LINK stub 00:04:52.676 CXX test/cpp_headers/blobfs_bdev.o 00:04:52.676 CXX test/cpp_headers/blobfs.o 00:04:52.676 CC app/spdk_lspci/spdk_lspci.o 00:04:52.934 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:52.934 LINK memory_ut 00:04:52.934 CC examples/accel/perf/accel_perf.o 00:04:52.934 LINK spdk_lspci 00:04:52.934 CXX test/cpp_headers/blob.o 00:04:52.934 CC examples/blob/hello_world/hello_blob.o 00:04:52.934 CC examples/nvme/hello_world/hello_world.o 00:04:52.934 CXX test/cpp_headers/conf.o 00:04:52.934 CC examples/nvme/reconnect/reconnect.o 00:04:52.934 LINK idxd_perf 00:04:53.192 CXX test/cpp_headers/config.o 00:04:53.192 LINK hello_fsdev 00:04:53.192 CC app/spdk_nvme_perf/perf.o 00:04:53.192 LINK hello_world 00:04:53.192 CC app/spdk_nvme_identify/identify.o 
00:04:53.192 CXX test/cpp_headers/cpuset.o 00:04:53.192 LINK hello_blob 00:04:53.192 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:53.192 LINK iscsi_fuzz 00:04:53.192 CXX test/cpp_headers/crc16.o 00:04:53.192 LINK reconnect 00:04:53.192 CC app/spdk_nvme_discover/discovery_aer.o 00:04:53.192 LINK accel_perf 00:04:53.451 CC examples/nvme/arbitration/arbitration.o 00:04:53.451 CXX test/cpp_headers/crc32.o 00:04:53.451 CC examples/nvme/hotplug/hotplug.o 00:04:53.451 CC examples/blob/cli/blobcli.o 00:04:53.451 LINK spdk_nvme_discover 00:04:53.451 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:53.451 CXX test/cpp_headers/crc64.o 00:04:53.451 CC examples/nvme/abort/abort.o 00:04:53.451 LINK arbitration 00:04:53.711 CXX test/cpp_headers/dif.o 00:04:53.711 LINK hotplug 00:04:53.711 LINK cmb_copy 00:04:53.711 CXX test/cpp_headers/dma.o 00:04:53.711 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:53.711 LINK nvme_manage 00:04:53.711 CXX test/cpp_headers/endian.o 00:04:53.711 LINK pmr_persistence 00:04:53.711 CC app/spdk_top/spdk_top.o 00:04:53.972 LINK abort 00:04:53.972 LINK blobcli 00:04:53.972 LINK spdk_nvme_identify 00:04:53.972 CC app/vhost/vhost.o 00:04:53.972 CXX test/cpp_headers/env_dpdk.o 00:04:53.972 CC app/spdk_dd/spdk_dd.o 00:04:53.972 LINK spdk_nvme_perf 00:04:53.972 CC app/fio/nvme/fio_plugin.o 00:04:53.972 CXX test/cpp_headers/env.o 00:04:53.972 LINK vhost 00:04:54.233 CC test/accel/dif/dif.o 00:04:54.233 CC app/fio/bdev/fio_plugin.o 00:04:54.233 CC examples/bdev/hello_world/hello_bdev.o 00:04:54.233 CXX test/cpp_headers/event.o 00:04:54.233 CC examples/bdev/bdevperf/bdevperf.o 00:04:54.233 CC test/blobfs/mkfs/mkfs.o 00:04:54.233 CXX test/cpp_headers/fd_group.o 00:04:54.233 LINK spdk_dd 00:04:54.233 CXX test/cpp_headers/fd.o 00:04:54.233 LINK hello_bdev 00:04:54.495 LINK mkfs 00:04:54.495 CXX test/cpp_headers/file.o 00:04:54.495 LINK spdk_nvme 00:04:54.495 CXX test/cpp_headers/fsdev.o 00:04:54.495 CXX test/cpp_headers/fsdev_module.o 00:04:54.495 CC test/nvme/aer/aer.o 00:04:54.495 CXX test/cpp_headers/ftl.o 00:04:54.495 CC test/lvol/esnap/esnap.o 00:04:54.495 LINK spdk_bdev 00:04:54.756 CC test/nvme/reset/reset.o 00:04:54.756 CXX test/cpp_headers/fuse_dispatcher.o 00:04:54.756 CXX test/cpp_headers/gpt_spec.o 00:04:54.756 CXX test/cpp_headers/hexlify.o 00:04:54.756 LINK spdk_top 00:04:54.756 CC test/nvme/sgl/sgl.o 00:04:54.756 CXX test/cpp_headers/histogram_data.o 00:04:54.756 LINK aer 00:04:54.756 CXX test/cpp_headers/idxd.o 00:04:54.756 LINK dif 00:04:54.756 CXX test/cpp_headers/idxd_spec.o 00:04:54.756 LINK reset 00:04:55.017 CXX test/cpp_headers/init.o 00:04:55.017 CC test/nvme/e2edp/nvme_dp.o 00:04:55.017 CXX test/cpp_headers/ioat.o 00:04:55.017 CXX test/cpp_headers/ioat_spec.o 00:04:55.017 CXX test/cpp_headers/iscsi_spec.o 00:04:55.017 CXX test/cpp_headers/json.o 00:04:55.017 CC test/nvme/overhead/overhead.o 00:04:55.017 LINK bdevperf 00:04:55.017 LINK sgl 00:04:55.017 CXX test/cpp_headers/jsonrpc.o 00:04:55.017 CC test/nvme/err_injection/err_injection.o 00:04:55.017 CC test/nvme/startup/startup.o 00:04:55.277 CXX test/cpp_headers/keyring.o 00:04:55.277 LINK nvme_dp 00:04:55.277 CC test/nvme/reserve/reserve.o 00:04:55.277 CC test/nvme/simple_copy/simple_copy.o 00:04:55.277 LINK overhead 00:04:55.277 CXX test/cpp_headers/keyring_module.o 00:04:55.277 LINK startup 00:04:55.277 LINK err_injection 00:04:55.277 CC test/nvme/connect_stress/connect_stress.o 00:04:55.277 CC test/nvme/boot_partition/boot_partition.o 00:04:55.277 CC examples/nvmf/nvmf/nvmf.o 00:04:55.277 LINK 
reserve 00:04:55.277 CXX test/cpp_headers/likely.o 00:04:55.538 LINK simple_copy 00:04:55.538 LINK boot_partition 00:04:55.538 CC test/nvme/compliance/nvme_compliance.o 00:04:55.538 CXX test/cpp_headers/log.o 00:04:55.538 LINK connect_stress 00:04:55.538 CC test/nvme/fused_ordering/fused_ordering.o 00:04:55.538 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:55.538 CC test/nvme/fdp/fdp.o 00:04:55.538 CC test/bdev/bdevio/bdevio.o 00:04:55.538 CXX test/cpp_headers/lvol.o 00:04:55.538 CXX test/cpp_headers/md5.o 00:04:55.538 LINK nvmf 00:04:55.538 LINK fused_ordering 00:04:55.798 CC test/nvme/cuse/cuse.o 00:04:55.798 LINK doorbell_aers 00:04:55.798 CXX test/cpp_headers/memory.o 00:04:55.798 CXX test/cpp_headers/mmio.o 00:04:55.798 CXX test/cpp_headers/nbd.o 00:04:55.799 CXX test/cpp_headers/net.o 00:04:55.799 LINK nvme_compliance 00:04:55.799 CXX test/cpp_headers/notify.o 00:04:55.799 CXX test/cpp_headers/nvme.o 00:04:55.799 CXX test/cpp_headers/nvme_intel.o 00:04:55.799 CXX test/cpp_headers/nvme_ocssd.o 00:04:55.799 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:55.799 CXX test/cpp_headers/nvme_spec.o 00:04:56.058 LINK fdp 00:04:56.058 CXX test/cpp_headers/nvme_zns.o 00:04:56.058 CXX test/cpp_headers/nvmf_cmd.o 00:04:56.058 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:56.058 CXX test/cpp_headers/nvmf.o 00:04:56.058 LINK bdevio 00:04:56.058 CXX test/cpp_headers/nvmf_spec.o 00:04:56.058 CXX test/cpp_headers/nvmf_transport.o 00:04:56.058 CXX test/cpp_headers/opal.o 00:04:56.058 CXX test/cpp_headers/opal_spec.o 00:04:56.058 CXX test/cpp_headers/pci_ids.o 00:04:56.058 CXX test/cpp_headers/pipe.o 00:04:56.058 CXX test/cpp_headers/queue.o 00:04:56.058 CXX test/cpp_headers/reduce.o 00:04:56.058 CXX test/cpp_headers/rpc.o 00:04:56.058 CXX test/cpp_headers/scheduler.o 00:04:56.058 CXX test/cpp_headers/scsi.o 00:04:56.359 CXX test/cpp_headers/scsi_spec.o 00:04:56.359 CXX test/cpp_headers/sock.o 00:04:56.359 CXX test/cpp_headers/stdinc.o 00:04:56.359 CXX test/cpp_headers/string.o 00:04:56.359 CXX test/cpp_headers/thread.o 00:04:56.359 CXX test/cpp_headers/trace.o 00:04:56.359 CXX test/cpp_headers/trace_parser.o 00:04:56.359 CXX test/cpp_headers/tree.o 00:04:56.359 CXX test/cpp_headers/ublk.o 00:04:56.359 CXX test/cpp_headers/util.o 00:04:56.359 CXX test/cpp_headers/uuid.o 00:04:56.359 CXX test/cpp_headers/version.o 00:04:56.359 CXX test/cpp_headers/vfio_user_pci.o 00:04:56.359 CXX test/cpp_headers/vfio_user_spec.o 00:04:56.359 CXX test/cpp_headers/vhost.o 00:04:56.359 CXX test/cpp_headers/vmd.o 00:04:56.359 CXX test/cpp_headers/xor.o 00:04:56.644 CXX test/cpp_headers/zipf.o 00:04:56.905 LINK cuse 00:04:59.453 LINK esnap 00:04:59.453 00:04:59.453 real 1m1.368s 00:04:59.453 user 5m7.527s 00:04:59.453 sys 0m50.724s 00:04:59.453 11:36:12 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:59.453 ************************************ 00:04:59.453 END TEST make 00:04:59.453 ************************************ 00:04:59.453 11:36:12 make -- common/autotest_common.sh@10 -- $ set +x 00:04:59.453 11:36:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:59.453 11:36:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:59.453 11:36:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:59.453 11:36:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.453 11:36:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:59.453 11:36:12 -- pm/common@44 -- $ pid=5785 00:04:59.453 11:36:12 -- pm/common@50 -- $ kill -TERM 
5785 00:04:59.453 11:36:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.453 11:36:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:59.453 11:36:12 -- pm/common@44 -- $ pid=5787 00:04:59.453 11:36:12 -- pm/common@50 -- $ kill -TERM 5787 00:04:59.453 11:36:12 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:59.453 11:36:12 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:59.453 11:36:12 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:59.715 11:36:12 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:59.715 11:36:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.715 11:36:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.715 11:36:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.715 11:36:12 -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.715 11:36:12 -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.715 11:36:12 -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.715 11:36:12 -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.715 11:36:12 -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.715 11:36:12 -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.715 11:36:12 -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.715 11:36:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.715 11:36:12 -- scripts/common.sh@344 -- # case "$op" in 00:04:59.715 11:36:12 -- scripts/common.sh@345 -- # : 1 00:04:59.715 11:36:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.715 11:36:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.715 11:36:12 -- scripts/common.sh@365 -- # decimal 1 00:04:59.715 11:36:12 -- scripts/common.sh@353 -- # local d=1 00:04:59.715 11:36:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.715 11:36:12 -- scripts/common.sh@355 -- # echo 1 00:04:59.715 11:36:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.715 11:36:12 -- scripts/common.sh@366 -- # decimal 2 00:04:59.715 11:36:12 -- scripts/common.sh@353 -- # local d=2 00:04:59.715 11:36:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.715 11:36:12 -- scripts/common.sh@355 -- # echo 2 00:04:59.715 11:36:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.715 11:36:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.715 11:36:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.715 11:36:12 -- scripts/common.sh@368 -- # return 0 00:04:59.715 11:36:12 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.715 11:36:12 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:59.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.715 --rc genhtml_branch_coverage=1 00:04:59.715 --rc genhtml_function_coverage=1 00:04:59.715 --rc genhtml_legend=1 00:04:59.715 --rc geninfo_all_blocks=1 00:04:59.715 --rc geninfo_unexecuted_blocks=1 00:04:59.716 00:04:59.716 ' 00:04:59.716 11:36:12 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:59.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.716 --rc genhtml_branch_coverage=1 00:04:59.716 --rc genhtml_function_coverage=1 00:04:59.716 --rc genhtml_legend=1 00:04:59.716 --rc geninfo_all_blocks=1 00:04:59.716 --rc geninfo_unexecuted_blocks=1 00:04:59.716 00:04:59.716 ' 00:04:59.716 11:36:12 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:59.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.716 --rc 
genhtml_branch_coverage=1 00:04:59.716 --rc genhtml_function_coverage=1 00:04:59.716 --rc genhtml_legend=1 00:04:59.716 --rc geninfo_all_blocks=1 00:04:59.716 --rc geninfo_unexecuted_blocks=1 00:04:59.716 00:04:59.716 ' 00:04:59.716 11:36:12 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:59.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.716 --rc genhtml_branch_coverage=1 00:04:59.716 --rc genhtml_function_coverage=1 00:04:59.716 --rc genhtml_legend=1 00:04:59.716 --rc geninfo_all_blocks=1 00:04:59.716 --rc geninfo_unexecuted_blocks=1 00:04:59.716 00:04:59.716 ' 00:04:59.716 11:36:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.716 11:36:12 -- nvmf/common.sh@7 -- # uname -s 00:04:59.716 11:36:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.716 11:36:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.716 11:36:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.716 11:36:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.716 11:36:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.716 11:36:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.716 11:36:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.716 11:36:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.716 11:36:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.716 11:36:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.716 11:36:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:229814c7-2b97-487f-9cba-d8dde402b6db 00:04:59.716 11:36:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=229814c7-2b97-487f-9cba-d8dde402b6db 00:04:59.716 11:36:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.716 11:36:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.716 11:36:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.716 11:36:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.716 11:36:12 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.716 11:36:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.716 11:36:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.716 11:36:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.716 11:36:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.716 11:36:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.716 11:36:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.716 11:36:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.716 11:36:12 -- paths/export.sh@5 -- # export PATH 00:04:59.716 11:36:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.716 11:36:12 -- nvmf/common.sh@51 -- # : 0 00:04:59.716 11:36:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.716 11:36:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.716 11:36:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.716 11:36:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.716 11:36:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.716 11:36:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.716 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.716 11:36:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.716 11:36:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.716 11:36:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.716 11:36:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:59.716 11:36:12 -- spdk/autotest.sh@32 -- # uname -s 00:04:59.716 11:36:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:59.716 11:36:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:59.716 11:36:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.716 11:36:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:59.716 11:36:12 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.716 11:36:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:59.716 11:36:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:59.716 11:36:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:59.716 11:36:12 -- spdk/autotest.sh@48 -- # udevadm_pid=66557 00:04:59.716 11:36:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:59.716 11:36:12 -- pm/common@17 -- # local monitor 00:04:59.716 11:36:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.716 11:36:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.716 11:36:12 -- pm/common@25 -- # sleep 1 00:04:59.716 11:36:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:59.716 11:36:12 -- pm/common@21 -- # date +%s 00:04:59.716 11:36:12 -- pm/common@21 -- # date +%s 00:04:59.716 11:36:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732016172 00:04:59.716 11:36:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732016172 00:04:59.716 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732016172_collect-vmstat.pm.log 00:04:59.716 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732016172_collect-cpu-load.pm.log 00:05:00.661 11:36:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:00.661 11:36:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:00.661 11:36:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.661 11:36:13 -- common/autotest_common.sh@10 -- # set +x 00:05:00.661 11:36:13 -- spdk/autotest.sh@59 -- # create_test_list 
00:05:00.661 11:36:13 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:00.661 11:36:13 -- common/autotest_common.sh@10 -- # set +x 00:05:00.661 11:36:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:00.661 11:36:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:00.661 11:36:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:00.661 11:36:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:00.661 11:36:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:00.661 11:36:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:00.661 11:36:13 -- common/autotest_common.sh@1455 -- # uname 00:05:00.661 11:36:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:00.661 11:36:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:00.661 11:36:13 -- common/autotest_common.sh@1475 -- # uname 00:05:00.661 11:36:14 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:00.661 11:36:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:00.661 11:36:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:00.661 lcov: LCOV version 1.15 00:05:00.661 11:36:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:15.573 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:15.573 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:30.578 11:36:41 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:30.578 11:36:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:30.578 11:36:41 -- common/autotest_common.sh@10 -- # set +x 00:05:30.578 11:36:41 -- spdk/autotest.sh@78 -- # rm -f 00:05:30.578 11:36:41 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.578 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.578 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:30.578 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:30.578 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:30.578 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:30.578 11:36:42 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:30.578 11:36:42 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:30.578 11:36:42 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:30.578 11:36:42 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:30.578 11:36:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.578 11:36:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0c0n1 00:05:30.578 11:36:42 -- common/autotest_common.sh@1648 -- # local device=nvme0c0n1 00:05:30.578 11:36:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:05:30.578 11:36:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.578 11:36:42 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.578 11:36:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:30.578 11:36:42 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:30.578 11:36:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.578 11:36:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.578 11:36:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.578 11:36:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:30.578 11:36:42 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:30.578 11:36:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:30.578 11:36:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.578 11:36:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.578 11:36:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:30.578 11:36:42 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:30.578 11:36:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:30.578 11:36:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.578 11:36:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.578 11:36:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:30.578 11:36:42 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:30.578 11:36:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:30.578 11:36:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.578 11:36:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.578 11:36:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:30.578 11:36:42 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:30.578 11:36:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:30.578 11:36:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.578 11:36:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:30.578 11:36:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:30.578 11:36:42 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:30.578 11:36:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:30.578 11:36:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:30.578 11:36:42 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:30.578 11:36:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.578 11:36:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.578 11:36:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:30.578 11:36:42 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:30.578 11:36:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:30.578 No valid GPT data, bailing 00:05:30.578 11:36:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:30.578 11:36:42 -- scripts/common.sh@394 -- # pt= 00:05:30.578 11:36:42 -- scripts/common.sh@395 -- # return 1 00:05:30.578 11:36:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:30.578 1+0 records in 00:05:30.578 1+0 records out 00:05:30.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00299396 s, 350 MB/s 
00:05:30.578 11:36:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.578 11:36:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.578 11:36:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:30.578 11:36:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:30.578 11:36:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:30.578 No valid GPT data, bailing 00:05:30.578 11:36:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:30.578 11:36:42 -- scripts/common.sh@394 -- # pt= 00:05:30.578 11:36:42 -- scripts/common.sh@395 -- # return 1 00:05:30.578 11:36:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:30.578 1+0 records in 00:05:30.578 1+0 records out 00:05:30.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100207 s, 105 MB/s 00:05:30.578 11:36:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.578 11:36:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.578 11:36:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:30.578 11:36:42 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:30.578 11:36:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:30.578 No valid GPT data, bailing 00:05:30.578 11:36:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:30.578 11:36:42 -- scripts/common.sh@394 -- # pt= 00:05:30.578 11:36:42 -- scripts/common.sh@395 -- # return 1 00:05:30.578 11:36:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:30.578 1+0 records in 00:05:30.578 1+0 records out 00:05:30.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043238 s, 243 MB/s 00:05:30.578 11:36:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.578 11:36:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.578 11:36:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:30.578 11:36:42 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:30.578 11:36:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:30.578 No valid GPT data, bailing 00:05:30.578 11:36:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:30.578 11:36:42 -- scripts/common.sh@394 -- # pt= 00:05:30.578 11:36:42 -- scripts/common.sh@395 -- # return 1 00:05:30.578 11:36:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:30.578 1+0 records in 00:05:30.578 1+0 records out 00:05:30.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0026005 s, 403 MB/s 00:05:30.578 11:36:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.578 11:36:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.578 11:36:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:30.578 11:36:42 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:30.578 11:36:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:30.578 No valid GPT data, bailing 00:05:30.578 11:36:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:30.578 11:36:42 -- scripts/common.sh@394 -- # pt= 00:05:30.578 11:36:42 -- scripts/common.sh@395 -- # return 1 00:05:30.578 11:36:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:30.578 1+0 records in 00:05:30.578 1+0 records out 00:05:30.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00298555 s, 351 MB/s 
00:05:30.578 11:36:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.579 11:36:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.579 11:36:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:30.579 11:36:42 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:30.579 11:36:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:30.579 No valid GPT data, bailing 00:05:30.579 11:36:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:30.579 11:36:42 -- scripts/common.sh@394 -- # pt= 00:05:30.579 11:36:42 -- scripts/common.sh@395 -- # return 1 00:05:30.579 11:36:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:30.579 1+0 records in 00:05:30.579 1+0 records out 00:05:30.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.003187 s, 329 MB/s 00:05:30.579 11:36:42 -- spdk/autotest.sh@105 -- # sync 00:05:30.579 11:36:42 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:30.579 11:36:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:30.579 11:36:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:31.143 11:36:44 -- spdk/autotest.sh@111 -- # uname -s 00:05:31.143 11:36:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:31.143 11:36:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:31.143 11:36:44 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:31.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.965 Hugepages 00:05:31.965 node hugesize free / total 00:05:31.965 node0 1048576kB 0 / 0 00:05:31.965 node0 2048kB 0 / 0 00:05:31.965 00:05:31.965 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:31.965 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:31.965 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:31.965 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:31.965 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:32.222 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:32.222 11:36:45 -- spdk/autotest.sh@117 -- # uname -s 00:05:32.222 11:36:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:32.222 11:36:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:32.222 11:36:45 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.045 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.045 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.045 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.045 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.045 11:36:46 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:33.978 11:36:47 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:33.978 11:36:47 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:33.978 11:36:47 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:33.978 11:36:47 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:33.978 11:36:47 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:33.978 11:36:47 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:33.978 11:36:47 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
00:05:33.978 11:36:47 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:33.978 11:36:47 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:34.235 11:36:47 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:34.235 11:36:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:34.235 11:36:47 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.492 Waiting for block devices as requested 00:05:34.755 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:34.755 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:34.755 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:34.755 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:40.034 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:40.034 11:36:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:40.034 11:36:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:40.034 11:36:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:40.034 11:36:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:40.034 11:36:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:40.034 11:36:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:40.034 11:36:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:40.034 11:36:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:40.034 11:36:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:40.034 11:36:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:40.034 11:36:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:40.034 11:36:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:40.034 11:36:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:40.034 11:36:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:40.034 11:36:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:40.034 11:36:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:40.034 11:36:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:40.034 11:36:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:40.034 11:36:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:40.034 11:36:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:40.034 11:36:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:40.034 11:36:53 -- common/autotest_common.sh@1541 -- # continue 00:05:40.034 11:36:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:40.034 11:36:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:40.034 11:36:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:40.034 11:36:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:40.034 11:36:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:05:40.034 11:36:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:40.034 11:36:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:40.034 11:36:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:40.034 11:36:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:40.034 11:36:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:40.034 11:36:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:40.034 11:36:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:40.034 11:36:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:40.034 11:36:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:40.034 11:36:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:40.034 11:36:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:40.034 11:36:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:40.034 11:36:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:40.034 11:36:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:40.034 11:36:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:40.034 11:36:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:40.034 11:36:53 -- common/autotest_common.sh@1541 -- # continue 00:05:40.034 11:36:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:40.034 11:36:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:40.034 11:36:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:40.034 11:36:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:40.034 11:36:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:40.034 11:36:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:40.034 11:36:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:40.034 11:36:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:40.034 11:36:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:40.034 11:36:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:40.034 11:36:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:40.034 11:36:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:40.034 11:36:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:40.034 11:36:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:40.034 11:36:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:40.034 11:36:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:40.034 11:36:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:40.034 11:36:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:40.034 11:36:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:40.035 11:36:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:40.035 11:36:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:40.035 11:36:53 -- common/autotest_common.sh@1541 -- # continue 00:05:40.035 11:36:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:40.035 11:36:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:40.035 11:36:53 -- 
common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:40.035 11:36:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:40.035 11:36:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:40.035 11:36:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:40.035 11:36:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:40.035 11:36:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:40.035 11:36:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:40.035 11:36:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:40.035 11:36:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:40.035 11:36:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:40.035 11:36:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:40.035 11:36:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:40.035 11:36:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:40.035 11:36:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:40.035 11:36:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:40.035 11:36:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:40.035 11:36:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:40.035 11:36:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:40.035 11:36:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:40.035 11:36:53 -- common/autotest_common.sh@1541 -- # continue 00:05:40.035 11:36:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:40.035 11:36:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.035 11:36:53 -- common/autotest_common.sh@10 -- # set +x 00:05:40.035 11:36:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:40.035 11:36:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.035 11:36:53 -- common/autotest_common.sh@10 -- # set +x 00:05:40.035 11:36:53 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.176 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.176 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.176 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.176 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.176 11:36:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:41.176 11:36:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:41.176 11:36:54 -- common/autotest_common.sh@10 -- # set +x 00:05:41.176 11:36:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:41.176 11:36:54 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:41.176 11:36:54 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:41.176 11:36:54 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:41.176 11:36:54 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:41.176 11:36:54 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:41.176 11:36:54 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:41.176 11:36:54 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:41.176 11:36:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:41.176 
11:36:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:41.176 11:36:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:41.176 11:36:54 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:41.176 11:36:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:41.438 11:36:54 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:41.438 11:36:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:41.438 11:36:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:41.438 11:36:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:41.438 11:36:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:41.438 11:36:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:41.438 11:36:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:41.438 11:36:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:41.438 11:36:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:41.438 11:36:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:41.438 11:36:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:41.438 11:36:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:41.438 11:36:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:41.438 11:36:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:41.438 11:36:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:41.438 11:36:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:41.438 11:36:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:41.438 11:36:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:41.438 11:36:54 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:41.438 11:36:54 -- common/autotest_common.sh@1570 -- # return 0 00:05:41.438 11:36:54 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:41.438 11:36:54 -- common/autotest_common.sh@1578 -- # return 0 00:05:41.438 11:36:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:41.438 11:36:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:41.438 11:36:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:41.438 11:36:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:41.438 11:36:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:41.438 11:36:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.438 11:36:54 -- common/autotest_common.sh@10 -- # set +x 00:05:41.438 11:36:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:41.438 11:36:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:41.438 11:36:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.438 11:36:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.438 11:36:54 -- common/autotest_common.sh@10 -- # set +x 00:05:41.438 ************************************ 00:05:41.438 START TEST env 00:05:41.438 ************************************ 00:05:41.438 11:36:54 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:41.438 * Looking for test storage... 
00:05:41.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:41.438 11:36:54 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:41.438 11:36:54 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.438 11:36:54 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:41.438 11:36:54 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.438 11:36:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.438 11:36:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.438 11:36:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.438 11:36:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.438 11:36:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.438 11:36:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.438 11:36:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.438 11:36:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.438 11:36:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.438 11:36:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.438 11:36:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.438 11:36:54 env -- scripts/common.sh@344 -- # case "$op" in 00:05:41.438 11:36:54 env -- scripts/common.sh@345 -- # : 1 00:05:41.438 11:36:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.438 11:36:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.438 11:36:54 env -- scripts/common.sh@365 -- # decimal 1 00:05:41.438 11:36:54 env -- scripts/common.sh@353 -- # local d=1 00:05:41.438 11:36:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.438 11:36:54 env -- scripts/common.sh@355 -- # echo 1 00:05:41.438 11:36:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.438 11:36:54 env -- scripts/common.sh@366 -- # decimal 2 00:05:41.438 11:36:54 env -- scripts/common.sh@353 -- # local d=2 00:05:41.438 11:36:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.438 11:36:54 env -- scripts/common.sh@355 -- # echo 2 00:05:41.438 11:36:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.438 11:36:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.438 11:36:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.438 11:36:54 env -- scripts/common.sh@368 -- # return 0 00:05:41.438 11:36:54 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.438 11:36:54 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.438 --rc genhtml_branch_coverage=1 00:05:41.438 --rc genhtml_function_coverage=1 00:05:41.438 --rc genhtml_legend=1 00:05:41.438 --rc geninfo_all_blocks=1 00:05:41.438 --rc geninfo_unexecuted_blocks=1 00:05:41.438 00:05:41.438 ' 00:05:41.438 11:36:54 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.438 --rc genhtml_branch_coverage=1 00:05:41.438 --rc genhtml_function_coverage=1 00:05:41.438 --rc genhtml_legend=1 00:05:41.438 --rc geninfo_all_blocks=1 00:05:41.438 --rc geninfo_unexecuted_blocks=1 00:05:41.438 00:05:41.438 ' 00:05:41.438 11:36:54 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.438 --rc genhtml_branch_coverage=1 00:05:41.438 --rc genhtml_function_coverage=1 00:05:41.438 --rc 
genhtml_legend=1 00:05:41.438 --rc geninfo_all_blocks=1 00:05:41.438 --rc geninfo_unexecuted_blocks=1 00:05:41.438 00:05:41.438 ' 00:05:41.438 11:36:54 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.438 --rc genhtml_branch_coverage=1 00:05:41.438 --rc genhtml_function_coverage=1 00:05:41.438 --rc genhtml_legend=1 00:05:41.438 --rc geninfo_all_blocks=1 00:05:41.438 --rc geninfo_unexecuted_blocks=1 00:05:41.438 00:05:41.438 ' 00:05:41.439 11:36:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:41.439 11:36:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.439 11:36:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.439 11:36:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.439 ************************************ 00:05:41.439 START TEST env_memory 00:05:41.439 ************************************ 00:05:41.439 11:36:54 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:41.700 00:05:41.701 00:05:41.701 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.701 http://cunit.sourceforge.net/ 00:05:41.701 00:05:41.701 00:05:41.701 Suite: memory 00:05:41.701 Test: alloc and free memory map ...[2024-11-19 11:36:54.888466] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:41.701 passed 00:05:41.701 Test: mem map translation ...[2024-11-19 11:36:54.927538] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:41.701 [2024-11-19 11:36:54.927674] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:41.701 [2024-11-19 11:36:54.927792] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:41.701 [2024-11-19 11:36:54.927832] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:41.701 passed 00:05:41.701 Test: mem map registration ...[2024-11-19 11:36:54.996341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:41.701 [2024-11-19 11:36:54.996521] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:41.701 passed 00:05:41.701 Test: mem map adjacent registrations ...passed 00:05:41.701 00:05:41.701 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.701 suites 1 1 n/a 0 0 00:05:41.701 tests 4 4 4 0 0 00:05:41.701 asserts 152 152 152 0 n/a 00:05:41.701 00:05:41.701 Elapsed time = 0.233 seconds 00:05:41.701 00:05:41.701 real 0m0.273s 00:05:41.701 user 0m0.244s 00:05:41.701 sys 0m0.021s 00:05:41.701 11:36:55 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.701 11:36:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:41.701 ************************************ 00:05:41.701 END TEST env_memory 00:05:41.701 ************************************ 00:05:41.962 11:36:55 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:41.962 11:36:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.962 11:36:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.962 11:36:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.962 ************************************ 00:05:41.962 START TEST env_vtophys 00:05:41.962 ************************************ 00:05:41.962 11:36:55 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:41.962 EAL: lib.eal log level changed from notice to debug 00:05:41.962 EAL: Detected lcore 0 as core 0 on socket 0 00:05:41.962 EAL: Detected lcore 1 as core 0 on socket 0 00:05:41.962 EAL: Detected lcore 2 as core 0 on socket 0 00:05:41.962 EAL: Detected lcore 3 as core 0 on socket 0 00:05:41.962 EAL: Detected lcore 4 as core 0 on socket 0 00:05:41.962 EAL: Detected lcore 5 as core 0 on socket 0 00:05:41.962 EAL: Detected lcore 6 as core 0 on socket 0 00:05:41.962 EAL: Detected lcore 7 as core 0 on socket 0 00:05:41.962 EAL: Detected lcore 8 as core 0 on socket 0 00:05:41.962 EAL: Detected lcore 9 as core 0 on socket 0 00:05:41.962 EAL: Maximum logical cores by configuration: 128 00:05:41.962 EAL: Detected CPU lcores: 10 00:05:41.962 EAL: Detected NUMA nodes: 1 00:05:41.962 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:41.962 EAL: Detected shared linkage of DPDK 00:05:41.962 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:41.962 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:41.962 EAL: Registered [vdev] bus. 00:05:41.962 EAL: bus.vdev log level changed from disabled to notice 00:05:41.962 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:41.962 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:41.962 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:41.962 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:41.962 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:41.962 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:41.962 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:41.962 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:41.962 EAL: No shared files mode enabled, IPC will be disabled 00:05:41.962 EAL: No shared files mode enabled, IPC is disabled 00:05:41.962 EAL: Selected IOVA mode 'PA' 00:05:41.962 EAL: Probing VFIO support... 00:05:41.962 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:41.962 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:41.962 EAL: Ask a virtual area of 0x2e000 bytes 00:05:41.962 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:41.962 EAL: Setting up physically contiguous memory... 
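Annotation (not part of the captured log): the EAL lines above explain the IOVA mode chosen for this run. The probe of /sys/module/vfio fails with error 2, so EAL skips VFIO support and, with no IOMMU-backed translation available, selects IOVA mode 'PA'. A minimal bash sketch of that module presence check (illustrative only, not DPDK's own probe):

for mod in vfio vfio_pci; do
  if [ -e "/sys/module/$mod" ]; then
    echo "$mod is loaded"        # VFIO-backed IOVA 'VA' becomes possible
  else
    echo "$mod is missing -> EAL skips VFIO and falls back to IOVA 'PA'"
  fi
done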
00:05:41.962 EAL: Setting maximum number of open files to 524288 00:05:41.962 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:41.962 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:41.962 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.962 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:41.962 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.962 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.962 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:41.962 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:41.962 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.962 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:41.962 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.962 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.962 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:41.963 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:41.963 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.963 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:41.963 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.963 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.963 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:41.963 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:41.963 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.963 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:41.963 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.963 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.963 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:41.963 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:41.963 EAL: Hugepages will be freed exactly as allocated. 00:05:41.963 EAL: No shared files mode enabled, IPC is disabled 00:05:41.963 EAL: No shared files mode enabled, IPC is disabled 00:05:41.963 EAL: TSC frequency is ~2600000 KHz 00:05:41.963 EAL: Main lcore 0 is ready (tid=7f305527da40;cpuset=[0]) 00:05:41.963 EAL: Trying to obtain current memory policy. 00:05:41.963 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.963 EAL: Restoring previous memory policy: 0 00:05:41.963 EAL: request: mp_malloc_sync 00:05:41.963 EAL: No shared files mode enabled, IPC is disabled 00:05:41.963 EAL: Heap on socket 0 was expanded by 2MB 00:05:41.963 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:41.963 EAL: No shared files mode enabled, IPC is disabled 00:05:41.963 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:41.963 EAL: Mem event callback 'spdk:(nil)' registered 00:05:41.963 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:41.963 00:05:41.963 00:05:41.963 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.963 http://cunit.sourceforge.net/ 00:05:41.963 00:05:41.963 00:05:41.963 Suite: components_suite 00:05:42.536 Test: vtophys_malloc_test ...passed 00:05:42.536 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:42.536 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.536 EAL: Restoring previous memory policy: 4 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was expanded by 4MB 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was shrunk by 4MB 00:05:42.536 EAL: Trying to obtain current memory policy. 00:05:42.536 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.536 EAL: Restoring previous memory policy: 4 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was expanded by 6MB 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was shrunk by 6MB 00:05:42.536 EAL: Trying to obtain current memory policy. 00:05:42.536 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.536 EAL: Restoring previous memory policy: 4 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was expanded by 10MB 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was shrunk by 10MB 00:05:42.536 EAL: Trying to obtain current memory policy. 00:05:42.536 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.536 EAL: Restoring previous memory policy: 4 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was expanded by 18MB 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was shrunk by 18MB 00:05:42.536 EAL: Trying to obtain current memory policy. 00:05:42.536 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.536 EAL: Restoring previous memory policy: 4 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was expanded by 34MB 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was shrunk by 34MB 00:05:42.536 EAL: Trying to obtain current memory policy. 
00:05:42.536 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.536 EAL: Restoring previous memory policy: 4 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was expanded by 66MB 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was shrunk by 66MB 00:05:42.536 EAL: Trying to obtain current memory policy. 00:05:42.536 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.536 EAL: Restoring previous memory policy: 4 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was expanded by 130MB 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was shrunk by 130MB 00:05:42.536 EAL: Trying to obtain current memory policy. 00:05:42.536 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.536 EAL: Restoring previous memory policy: 4 00:05:42.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.536 EAL: request: mp_malloc_sync 00:05:42.536 EAL: No shared files mode enabled, IPC is disabled 00:05:42.536 EAL: Heap on socket 0 was expanded by 258MB 00:05:42.537 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.537 EAL: request: mp_malloc_sync 00:05:42.537 EAL: No shared files mode enabled, IPC is disabled 00:05:42.537 EAL: Heap on socket 0 was shrunk by 258MB 00:05:42.537 EAL: Trying to obtain current memory policy. 00:05:42.537 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.797 EAL: Restoring previous memory policy: 4 00:05:42.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.797 EAL: request: mp_malloc_sync 00:05:42.797 EAL: No shared files mode enabled, IPC is disabled 00:05:42.797 EAL: Heap on socket 0 was expanded by 514MB 00:05:42.797 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.797 EAL: request: mp_malloc_sync 00:05:42.797 EAL: No shared files mode enabled, IPC is disabled 00:05:42.797 EAL: Heap on socket 0 was shrunk by 514MB 00:05:42.797 EAL: Trying to obtain current memory policy. 
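Annotation (not part of the captured log): the expand/shrink sizes reported by vtophys_spdk_malloc_test follow the pattern 2^k + 2 MB, which is why the heap grows by 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, with a matching shrink after each free. A one-liner to reproduce that arithmetic (illustrative only):

for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
# prints: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB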
00:05:42.797 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.058 EAL: Restoring previous memory policy: 4 00:05:43.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.058 EAL: request: mp_malloc_sync 00:05:43.058 EAL: No shared files mode enabled, IPC is disabled 00:05:43.058 EAL: Heap on socket 0 was expanded by 1026MB 00:05:43.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.319 passed 00:05:43.319 00:05:43.319 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.319 suites 1 1 n/a 0 0 00:05:43.319 tests 2 2 2 0 0 00:05:43.319 asserts 5463 5463 5463 0 n/a 00:05:43.319 00:05:43.319 Elapsed time = 1.123 seconds 00:05:43.319 EAL: request: mp_malloc_sync 00:05:43.319 EAL: No shared files mode enabled, IPC is disabled 00:05:43.319 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:43.319 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.319 EAL: request: mp_malloc_sync 00:05:43.319 EAL: No shared files mode enabled, IPC is disabled 00:05:43.319 EAL: Heap on socket 0 was shrunk by 2MB 00:05:43.319 EAL: No shared files mode enabled, IPC is disabled 00:05:43.319 EAL: No shared files mode enabled, IPC is disabled 00:05:43.319 EAL: No shared files mode enabled, IPC is disabled 00:05:43.319 00:05:43.319 real 0m1.343s 00:05:43.319 user 0m0.532s 00:05:43.319 sys 0m0.669s 00:05:43.319 11:36:56 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.319 11:36:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:43.319 ************************************ 00:05:43.319 END TEST env_vtophys 00:05:43.319 ************************************ 00:05:43.319 11:36:56 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.319 11:36:56 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.319 11:36:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.319 11:36:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.319 ************************************ 00:05:43.319 START TEST env_pci 00:05:43.319 ************************************ 00:05:43.320 11:36:56 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.320 00:05:43.320 00:05:43.320 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.320 http://cunit.sourceforge.net/ 00:05:43.320 00:05:43.320 00:05:43.320 Suite: pci 00:05:43.320 Test: pci_hook ...[2024-11-19 11:36:56.577606] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69274 has claimed it 00:05:43.320 EAL: Cannot find device (10000:00:01.0) 00:05:43.320 EAL: Failed to attach device on primary process 00:05:43.320 passed 00:05:43.320 00:05:43.320 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.320 suites 1 1 n/a 0 0 00:05:43.320 tests 1 1 1 0 0 00:05:43.320 asserts 25 25 25 0 n/a 00:05:43.320 00:05:43.320 Elapsed time = 0.005 seconds 00:05:43.320 00:05:43.320 real 0m0.057s 00:05:43.320 user 0m0.020s 00:05:43.320 sys 0m0.035s 00:05:43.320 11:36:56 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.320 11:36:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:43.320 ************************************ 00:05:43.320 END TEST env_pci 00:05:43.320 ************************************ 00:05:43.320 11:36:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:43.320 11:36:56 env -- env/env.sh@15 -- # uname 00:05:43.320 11:36:56 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:43.320 11:36:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:43.320 11:36:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.320 11:36:56 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:43.320 11:36:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.320 11:36:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.320 ************************************ 00:05:43.320 START TEST env_dpdk_post_init 00:05:43.320 ************************************ 00:05:43.320 11:36:56 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.320 EAL: Detected CPU lcores: 10 00:05:43.320 EAL: Detected NUMA nodes: 1 00:05:43.320 EAL: Detected shared linkage of DPDK 00:05:43.320 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.320 EAL: Selected IOVA mode 'PA' 00:05:43.581 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.581 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:43.581 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:43.581 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:43.581 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:43.581 Starting DPDK initialization... 00:05:43.581 Starting SPDK post initialization... 00:05:43.581 SPDK NVMe probe 00:05:43.581 Attaching to 0000:00:10.0 00:05:43.581 Attaching to 0000:00:11.0 00:05:43.581 Attaching to 0000:00:12.0 00:05:43.581 Attaching to 0000:00:13.0 00:05:43.581 Attached to 0000:00:13.0 00:05:43.581 Attached to 0000:00:10.0 00:05:43.581 Attached to 0000:00:11.0 00:05:43.581 Attached to 0000:00:12.0 00:05:43.581 Cleaning up... 
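Annotation (not part of the captured log): the probes above are issued in BDF order (10.0, 11.0, 12.0, 13.0) while the "Attached to" lines arrive with 0000:00:13.0 first; controller attach completes asynchronously, so nothing downstream should rely on attach order. For reference, a hedged sketch of enumerating NVMe controllers directly from sysfs by PCI class code (illustrative, not the harness's gen_nvme.sh):

for dev in /sys/bus/pci/devices/*; do
  # "class" holds the 24-bit PCI class code; 0x010802 = mass storage / NVM / NVMe
  if [ "$(cat "$dev/class")" = "0x010802" ]; then
    echo "NVMe controller at $(basename "$dev")"
  fi
done

On this VM that would list the four QEMU controllers, 0000:00:10.0 through 0000:00:13.0.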
00:05:43.581 00:05:43.581 real 0m0.208s 00:05:43.581 user 0m0.060s 00:05:43.581 sys 0m0.049s 00:05:43.581 ************************************ 00:05:43.581 END TEST env_dpdk_post_init 00:05:43.581 ************************************ 00:05:43.581 11:36:56 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.581 11:36:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.581 11:36:56 env -- env/env.sh@26 -- # uname 00:05:43.581 11:36:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:43.581 11:36:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.581 11:36:56 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.581 11:36:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.582 11:36:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.582 ************************************ 00:05:43.582 START TEST env_mem_callbacks 00:05:43.582 ************************************ 00:05:43.582 11:36:56 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.582 EAL: Detected CPU lcores: 10 00:05:43.582 EAL: Detected NUMA nodes: 1 00:05:43.582 EAL: Detected shared linkage of DPDK 00:05:43.582 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.582 EAL: Selected IOVA mode 'PA' 00:05:43.903 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.903 00:05:43.903 00:05:43.903 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.903 http://cunit.sourceforge.net/ 00:05:43.903 00:05:43.903 00:05:43.903 Suite: memory 00:05:43.903 Test: test ... 00:05:43.903 register 0x200000200000 2097152 00:05:43.903 malloc 3145728 00:05:43.903 register 0x200000400000 4194304 00:05:43.903 buf 0x200000500000 len 3145728 PASSED 00:05:43.903 malloc 64 00:05:43.903 buf 0x2000004fff40 len 64 PASSED 00:05:43.903 malloc 4194304 00:05:43.903 register 0x200000800000 6291456 00:05:43.903 buf 0x200000a00000 len 4194304 PASSED 00:05:43.903 free 0x200000500000 3145728 00:05:43.903 free 0x2000004fff40 64 00:05:43.903 unregister 0x200000400000 4194304 PASSED 00:05:43.903 free 0x200000a00000 4194304 00:05:43.903 unregister 0x200000800000 6291456 PASSED 00:05:43.903 malloc 8388608 00:05:43.903 register 0x200000400000 10485760 00:05:43.903 buf 0x200000600000 len 8388608 PASSED 00:05:43.903 free 0x200000600000 8388608 00:05:43.903 unregister 0x200000400000 10485760 PASSED 00:05:43.903 passed 00:05:43.903 00:05:43.903 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.903 suites 1 1 n/a 0 0 00:05:43.903 tests 1 1 1 0 0 00:05:43.903 asserts 15 15 15 0 n/a 00:05:43.903 00:05:43.903 Elapsed time = 0.010 seconds 00:05:43.903 00:05:43.903 real 0m0.150s 00:05:43.903 user 0m0.018s 00:05:43.903 sys 0m0.031s 00:05:43.903 11:36:57 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.903 ************************************ 00:05:43.903 END TEST env_mem_callbacks 00:05:43.903 ************************************ 00:05:43.903 11:36:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:43.903 00:05:43.903 real 0m2.453s 00:05:43.903 user 0m1.039s 00:05:43.903 sys 0m1.008s 00:05:43.903 11:36:57 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.903 11:36:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.903 ************************************ 00:05:43.903 END TEST env 00:05:43.903 
************************************ 00:05:43.903 11:36:57 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:43.903 11:36:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.903 11:36:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.903 11:36:57 -- common/autotest_common.sh@10 -- # set +x 00:05:43.903 ************************************ 00:05:43.903 START TEST rpc 00:05:43.903 ************************************ 00:05:43.903 11:36:57 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:43.903 * Looking for test storage... 00:05:43.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.903 11:36:57 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:43.903 11:36:57 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:43.903 11:36:57 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:43.903 11:36:57 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:43.903 11:36:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.903 11:36:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.903 11:36:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.903 11:36:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.903 11:36:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.903 11:36:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.903 11:36:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.903 11:36:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.903 11:36:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.903 11:36:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.903 11:36:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.903 11:36:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.903 11:36:57 rpc -- scripts/common.sh@345 -- # : 1 00:05:43.903 11:36:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.903 11:36:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.903 11:36:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.903 11:36:57 rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.903 11:36:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.903 11:36:57 rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.903 11:36:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.903 11:36:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.184 11:36:57 rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.184 11:36:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.184 11:36:57 rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.184 11:36:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.184 11:36:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.184 11:36:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.184 11:36:57 rpc -- scripts/common.sh@368 -- # return 0 00:05:44.184 11:36:57 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.184 11:36:57 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:44.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.184 --rc genhtml_branch_coverage=1 00:05:44.184 --rc genhtml_function_coverage=1 00:05:44.184 --rc genhtml_legend=1 00:05:44.184 --rc geninfo_all_blocks=1 00:05:44.184 --rc geninfo_unexecuted_blocks=1 00:05:44.184 00:05:44.184 ' 00:05:44.184 11:36:57 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:44.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.184 --rc genhtml_branch_coverage=1 00:05:44.184 --rc genhtml_function_coverage=1 00:05:44.184 --rc genhtml_legend=1 00:05:44.184 --rc geninfo_all_blocks=1 00:05:44.184 --rc geninfo_unexecuted_blocks=1 00:05:44.184 00:05:44.184 ' 00:05:44.184 11:36:57 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:44.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.184 --rc genhtml_branch_coverage=1 00:05:44.184 --rc genhtml_function_coverage=1 00:05:44.184 --rc genhtml_legend=1 00:05:44.184 --rc geninfo_all_blocks=1 00:05:44.184 --rc geninfo_unexecuted_blocks=1 00:05:44.184 00:05:44.184 ' 00:05:44.184 11:36:57 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:44.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.184 --rc genhtml_branch_coverage=1 00:05:44.184 --rc genhtml_function_coverage=1 00:05:44.184 --rc genhtml_legend=1 00:05:44.184 --rc geninfo_all_blocks=1 00:05:44.184 --rc geninfo_unexecuted_blocks=1 00:05:44.184 00:05:44.184 ' 00:05:44.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.184 11:36:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69401 00:05:44.184 11:36:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.184 11:36:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69401 00:05:44.184 11:36:57 rpc -- common/autotest_common.sh@831 -- # '[' -z 69401 ']' 00:05:44.184 11:36:57 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.184 11:36:57 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.184 11:36:57 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
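Annotation (not part of the captured log): waitforlisten 69401 gates the rest of rpc.sh on target readiness; the trace shows its inputs, rpc_addr=/var/tmp/spdk.sock and max_retries=100. A stripped-down stand-in, assuming only bash; the real helper in autotest_common.sh does more (for example, it also checks that the PID is still alive):

wait_for_rpc_sock() {
  local sock=${1:-/var/tmp/spdk.sock} tries=${2:-100}
  while (( tries-- > 0 )); do
    [ -S "$sock" ] && return 0   # socket node exists; target is listening
    sleep 0.1
  done
  return 1                       # timed out
}
wait_for_rpc_sock /var/tmp/spdk.sock 100 || echo 'spdk_tgt never came up'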
00:05:44.184 11:36:57 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.184 11:36:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.184 11:36:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:44.184 [2024-11-19 11:36:57.353423] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:44.184 [2024-11-19 11:36:57.353543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69401 ] 00:05:44.184 [2024-11-19 11:36:57.489441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.184 [2024-11-19 11:36:57.521450] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:44.184 [2024-11-19 11:36:57.521500] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69401' to capture a snapshot of events at runtime. 00:05:44.184 [2024-11-19 11:36:57.521513] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.185 [2024-11-19 11:36:57.521521] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.185 [2024-11-19 11:36:57.521534] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69401 for offline analysis/debug. 00:05:44.185 [2024-11-19 11:36:57.521562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.129 11:36:58 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.129 11:36:58 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:45.129 11:36:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.129 11:36:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.129 11:36:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.129 11:36:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.129 11:36:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.129 11:36:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.129 11:36:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.129 ************************************ 00:05:45.129 START TEST rpc_integrity 00:05:45.129 ************************************ 00:05:45.129 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:45.129 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.129 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.129 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.129 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.129 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.129 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.129 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.129 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
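Annotation (not part of the captured log): distilled, the rpc_integrity flow traced here is: confirm the target reports zero bdevs, create an 8 MiB malloc bdev with 512-byte blocks (8 MiB / 512 B = 16384, matching "num_blocks": 16384 in the dump that follows), wrap it in a passthru bdev, confirm the count is 2, then delete both. Equivalent standalone calls against a running target (rpc_cmd is the harness's wrapper around scripts/rpc.py; the path below is illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_get_bdevs | jq length                 # expect 0 on a clean target
malloc=$("$rpc" bdev_malloc_create 8 512)         # prints the new bdev name, e.g. Malloc0
"$rpc" bdev_passthru_create -b "$malloc" -p Passthru0
"$rpc" bdev_get_bdevs | jq length                 # expect 2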
00:05:45.129 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.129 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.129 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.129 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.129 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.129 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.129 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.129 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.129 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.129 { 00:05:45.129 "name": "Malloc0", 00:05:45.129 "aliases": [ 00:05:45.129 "20b651b1-2c8c-4342-b820-ad4f001572f6" 00:05:45.129 ], 00:05:45.129 "product_name": "Malloc disk", 00:05:45.129 "block_size": 512, 00:05:45.129 "num_blocks": 16384, 00:05:45.129 "uuid": "20b651b1-2c8c-4342-b820-ad4f001572f6", 00:05:45.129 "assigned_rate_limits": { 00:05:45.129 "rw_ios_per_sec": 0, 00:05:45.129 "rw_mbytes_per_sec": 0, 00:05:45.129 "r_mbytes_per_sec": 0, 00:05:45.129 "w_mbytes_per_sec": 0 00:05:45.129 }, 00:05:45.129 "claimed": false, 00:05:45.129 "zoned": false, 00:05:45.129 "supported_io_types": { 00:05:45.129 "read": true, 00:05:45.129 "write": true, 00:05:45.129 "unmap": true, 00:05:45.129 "flush": true, 00:05:45.129 "reset": true, 00:05:45.129 "nvme_admin": false, 00:05:45.129 "nvme_io": false, 00:05:45.129 "nvme_io_md": false, 00:05:45.129 "write_zeroes": true, 00:05:45.129 "zcopy": true, 00:05:45.130 "get_zone_info": false, 00:05:45.130 "zone_management": false, 00:05:45.130 "zone_append": false, 00:05:45.130 "compare": false, 00:05:45.130 "compare_and_write": false, 00:05:45.130 "abort": true, 00:05:45.130 "seek_hole": false, 00:05:45.130 "seek_data": false, 00:05:45.130 "copy": true, 00:05:45.130 "nvme_iov_md": false 00:05:45.130 }, 00:05:45.130 "memory_domains": [ 00:05:45.130 { 00:05:45.130 "dma_device_id": "system", 00:05:45.130 "dma_device_type": 1 00:05:45.130 }, 00:05:45.130 { 00:05:45.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.130 "dma_device_type": 2 00:05:45.130 } 00:05:45.130 ], 00:05:45.130 "driver_specific": {} 00:05:45.130 } 00:05:45.130 ]' 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.130 [2024-11-19 11:36:58.291344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.130 [2024-11-19 11:36:58.291401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.130 [2024-11-19 11:36:58.291439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:45.130 [2024-11-19 11:36:58.291448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.130 [2024-11-19 11:36:58.293644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.130 [2024-11-19 11:36:58.293675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.130 
Passthru0 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.130 { 00:05:45.130 "name": "Malloc0", 00:05:45.130 "aliases": [ 00:05:45.130 "20b651b1-2c8c-4342-b820-ad4f001572f6" 00:05:45.130 ], 00:05:45.130 "product_name": "Malloc disk", 00:05:45.130 "block_size": 512, 00:05:45.130 "num_blocks": 16384, 00:05:45.130 "uuid": "20b651b1-2c8c-4342-b820-ad4f001572f6", 00:05:45.130 "assigned_rate_limits": { 00:05:45.130 "rw_ios_per_sec": 0, 00:05:45.130 "rw_mbytes_per_sec": 0, 00:05:45.130 "r_mbytes_per_sec": 0, 00:05:45.130 "w_mbytes_per_sec": 0 00:05:45.130 }, 00:05:45.130 "claimed": true, 00:05:45.130 "claim_type": "exclusive_write", 00:05:45.130 "zoned": false, 00:05:45.130 "supported_io_types": { 00:05:45.130 "read": true, 00:05:45.130 "write": true, 00:05:45.130 "unmap": true, 00:05:45.130 "flush": true, 00:05:45.130 "reset": true, 00:05:45.130 "nvme_admin": false, 00:05:45.130 "nvme_io": false, 00:05:45.130 "nvme_io_md": false, 00:05:45.130 "write_zeroes": true, 00:05:45.130 "zcopy": true, 00:05:45.130 "get_zone_info": false, 00:05:45.130 "zone_management": false, 00:05:45.130 "zone_append": false, 00:05:45.130 "compare": false, 00:05:45.130 "compare_and_write": false, 00:05:45.130 "abort": true, 00:05:45.130 "seek_hole": false, 00:05:45.130 "seek_data": false, 00:05:45.130 "copy": true, 00:05:45.130 "nvme_iov_md": false 00:05:45.130 }, 00:05:45.130 "memory_domains": [ 00:05:45.130 { 00:05:45.130 "dma_device_id": "system", 00:05:45.130 "dma_device_type": 1 00:05:45.130 }, 00:05:45.130 { 00:05:45.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.130 "dma_device_type": 2 00:05:45.130 } 00:05:45.130 ], 00:05:45.130 "driver_specific": {} 00:05:45.130 }, 00:05:45.130 { 00:05:45.130 "name": "Passthru0", 00:05:45.130 "aliases": [ 00:05:45.130 "2f5933a3-8897-5036-ba34-d2ab539f2d03" 00:05:45.130 ], 00:05:45.130 "product_name": "passthru", 00:05:45.130 "block_size": 512, 00:05:45.130 "num_blocks": 16384, 00:05:45.130 "uuid": "2f5933a3-8897-5036-ba34-d2ab539f2d03", 00:05:45.130 "assigned_rate_limits": { 00:05:45.130 "rw_ios_per_sec": 0, 00:05:45.130 "rw_mbytes_per_sec": 0, 00:05:45.130 "r_mbytes_per_sec": 0, 00:05:45.130 "w_mbytes_per_sec": 0 00:05:45.130 }, 00:05:45.130 "claimed": false, 00:05:45.130 "zoned": false, 00:05:45.130 "supported_io_types": { 00:05:45.130 "read": true, 00:05:45.130 "write": true, 00:05:45.130 "unmap": true, 00:05:45.130 "flush": true, 00:05:45.130 "reset": true, 00:05:45.130 "nvme_admin": false, 00:05:45.130 "nvme_io": false, 00:05:45.130 "nvme_io_md": false, 00:05:45.130 "write_zeroes": true, 00:05:45.130 "zcopy": true, 00:05:45.130 "get_zone_info": false, 00:05:45.130 "zone_management": false, 00:05:45.130 "zone_append": false, 00:05:45.130 "compare": false, 00:05:45.130 "compare_and_write": false, 00:05:45.130 "abort": true, 00:05:45.130 "seek_hole": false, 00:05:45.130 "seek_data": false, 00:05:45.130 "copy": true, 00:05:45.130 "nvme_iov_md": false 00:05:45.130 }, 00:05:45.130 "memory_domains": [ 00:05:45.130 { 00:05:45.130 "dma_device_id": "system", 00:05:45.130 "dma_device_type": 1 00:05:45.130 }, 
00:05:45.130 { 00:05:45.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.130 "dma_device_type": 2 00:05:45.130 } 00:05:45.130 ], 00:05:45.130 "driver_specific": { 00:05:45.130 "passthru": { 00:05:45.130 "name": "Passthru0", 00:05:45.130 "base_bdev_name": "Malloc0" 00:05:45.130 } 00:05:45.130 } 00:05:45.130 } 00:05:45.130 ]' 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:45.130 11:36:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.130 00:05:45.130 real 0m0.220s 00:05:45.130 user 0m0.130s 00:05:45.130 sys 0m0.028s 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.130 11:36:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.130 ************************************ 00:05:45.130 END TEST rpc_integrity 00:05:45.130 ************************************ 00:05:45.130 11:36:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:45.130 11:36:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.130 11:36:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.130 11:36:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.130 ************************************ 00:05:45.130 START TEST rpc_plugins 00:05:45.130 ************************************ 00:05:45.130 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:45.130 11:36:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:45.130 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.130 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.130 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.130 11:36:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:45.130 11:36:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:45.130 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.130 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.130 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.130 11:36:58 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:45.130 { 00:05:45.130 "name": "Malloc1", 00:05:45.130 "aliases": [ 00:05:45.130 "8df55a8e-1e37-4eed-8e61-7c045e26abeb" 00:05:45.130 ], 00:05:45.130 "product_name": "Malloc disk", 00:05:45.130 "block_size": 4096, 00:05:45.130 "num_blocks": 256, 00:05:45.130 "uuid": "8df55a8e-1e37-4eed-8e61-7c045e26abeb", 00:05:45.130 "assigned_rate_limits": { 00:05:45.130 "rw_ios_per_sec": 0, 00:05:45.130 "rw_mbytes_per_sec": 0, 00:05:45.130 "r_mbytes_per_sec": 0, 00:05:45.130 "w_mbytes_per_sec": 0 00:05:45.130 }, 00:05:45.130 "claimed": false, 00:05:45.130 "zoned": false, 00:05:45.130 "supported_io_types": { 00:05:45.130 "read": true, 00:05:45.130 "write": true, 00:05:45.130 "unmap": true, 00:05:45.130 "flush": true, 00:05:45.130 "reset": true, 00:05:45.130 "nvme_admin": false, 00:05:45.130 "nvme_io": false, 00:05:45.130 "nvme_io_md": false, 00:05:45.130 "write_zeroes": true, 00:05:45.130 "zcopy": true, 00:05:45.131 "get_zone_info": false, 00:05:45.131 "zone_management": false, 00:05:45.131 "zone_append": false, 00:05:45.131 "compare": false, 00:05:45.131 "compare_and_write": false, 00:05:45.131 "abort": true, 00:05:45.131 "seek_hole": false, 00:05:45.131 "seek_data": false, 00:05:45.131 "copy": true, 00:05:45.131 "nvme_iov_md": false 00:05:45.131 }, 00:05:45.131 "memory_domains": [ 00:05:45.131 { 00:05:45.131 "dma_device_id": "system", 00:05:45.131 "dma_device_type": 1 00:05:45.131 }, 00:05:45.131 { 00:05:45.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.131 "dma_device_type": 2 00:05:45.131 } 00:05:45.131 ], 00:05:45.131 "driver_specific": {} 00:05:45.131 } 00:05:45.131 ]' 00:05:45.131 11:36:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:45.131 11:36:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:45.131 11:36:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:45.131 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.131 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.131 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.131 11:36:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:45.131 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.131 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.131 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.131 11:36:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:45.131 11:36:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:45.392 11:36:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:45.392 00:05:45.392 real 0m0.116s 00:05:45.392 user 0m0.063s 00:05:45.392 sys 0m0.017s 00:05:45.392 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.392 11:36:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.392 ************************************ 00:05:45.392 END TEST rpc_plugins 00:05:45.392 ************************************ 00:05:45.392 11:36:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.392 11:36:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.392 11:36:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.392 11:36:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.392 ************************************ 00:05:45.392 START TEST rpc_trace_cmd_test 
00:05:45.392 ************************************ 00:05:45.392 11:36:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:45.392 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:45.392 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:45.392 11:36:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.392 11:36:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.392 11:36:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.392 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:45.392 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69401", 00:05:45.392 "tpoint_group_mask": "0x8", 00:05:45.392 "iscsi_conn": { 00:05:45.392 "mask": "0x2", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "scsi": { 00:05:45.392 "mask": "0x4", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "bdev": { 00:05:45.392 "mask": "0x8", 00:05:45.392 "tpoint_mask": "0xffffffffffffffff" 00:05:45.392 }, 00:05:45.392 "nvmf_rdma": { 00:05:45.392 "mask": "0x10", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "nvmf_tcp": { 00:05:45.392 "mask": "0x20", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "ftl": { 00:05:45.392 "mask": "0x40", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "blobfs": { 00:05:45.392 "mask": "0x80", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "dsa": { 00:05:45.392 "mask": "0x200", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "thread": { 00:05:45.392 "mask": "0x400", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "nvme_pcie": { 00:05:45.392 "mask": "0x800", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "iaa": { 00:05:45.392 "mask": "0x1000", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "nvme_tcp": { 00:05:45.392 "mask": "0x2000", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "bdev_nvme": { 00:05:45.392 "mask": "0x4000", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "sock": { 00:05:45.392 "mask": "0x8000", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "blob": { 00:05:45.392 "mask": "0x10000", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 }, 00:05:45.392 "bdev_raid": { 00:05:45.392 "mask": "0x20000", 00:05:45.392 "tpoint_mask": "0x0" 00:05:45.392 } 00:05:45.392 }' 00:05:45.392 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:45.392 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:45.392 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:45.392 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:45.392 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:45.393 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:45.393 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:45.393 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:45.393 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:45.393 11:36:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:45.393 00:05:45.393 real 0m0.181s 00:05:45.393 user 0m0.148s 00:05:45.393 sys 0m0.023s 00:05:45.393 11:36:58 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.393 11:36:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.393 ************************************ 00:05:45.393 END TEST rpc_trace_cmd_test 00:05:45.393 ************************************ 00:05:45.654 11:36:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:45.654 11:36:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:45.654 11:36:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:45.654 11:36:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.654 11:36:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.654 11:36:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.654 ************************************ 00:05:45.654 START TEST rpc_daemon_integrity 00:05:45.654 ************************************ 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.654 { 00:05:45.654 "name": "Malloc2", 00:05:45.654 "aliases": [ 00:05:45.654 "df35f7ce-7957-44a1-b57c-f6c44eafc5d3" 00:05:45.654 ], 00:05:45.654 "product_name": "Malloc disk", 00:05:45.654 "block_size": 512, 00:05:45.654 "num_blocks": 16384, 00:05:45.654 "uuid": "df35f7ce-7957-44a1-b57c-f6c44eafc5d3", 00:05:45.654 "assigned_rate_limits": { 00:05:45.654 "rw_ios_per_sec": 0, 00:05:45.654 "rw_mbytes_per_sec": 0, 00:05:45.654 "r_mbytes_per_sec": 0, 00:05:45.654 "w_mbytes_per_sec": 0 00:05:45.654 }, 00:05:45.654 "claimed": false, 00:05:45.654 "zoned": false, 00:05:45.654 "supported_io_types": { 00:05:45.654 "read": true, 00:05:45.654 "write": true, 00:05:45.654 "unmap": true, 00:05:45.654 "flush": true, 00:05:45.654 "reset": true, 00:05:45.654 "nvme_admin": false, 00:05:45.654 "nvme_io": false, 00:05:45.654 "nvme_io_md": false, 00:05:45.654 "write_zeroes": true, 00:05:45.654 "zcopy": true, 00:05:45.654 "get_zone_info": false, 00:05:45.654 "zone_management": false, 00:05:45.654 "zone_append": false, 
00:05:45.654 "compare": false, 00:05:45.654 "compare_and_write": false, 00:05:45.654 "abort": true, 00:05:45.654 "seek_hole": false, 00:05:45.654 "seek_data": false, 00:05:45.654 "copy": true, 00:05:45.654 "nvme_iov_md": false 00:05:45.654 }, 00:05:45.654 "memory_domains": [ 00:05:45.654 { 00:05:45.654 "dma_device_id": "system", 00:05:45.654 "dma_device_type": 1 00:05:45.654 }, 00:05:45.654 { 00:05:45.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.654 "dma_device_type": 2 00:05:45.654 } 00:05:45.654 ], 00:05:45.654 "driver_specific": {} 00:05:45.654 } 00:05:45.654 ]' 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.654 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.655 [2024-11-19 11:36:58.919747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:45.655 [2024-11-19 11:36:58.919807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.655 [2024-11-19 11:36:58.919830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:45.655 [2024-11-19 11:36:58.919839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.655 [2024-11-19 11:36:58.922024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.655 [2024-11-19 11:36:58.922054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.655 Passthru0 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.655 { 00:05:45.655 "name": "Malloc2", 00:05:45.655 "aliases": [ 00:05:45.655 "df35f7ce-7957-44a1-b57c-f6c44eafc5d3" 00:05:45.655 ], 00:05:45.655 "product_name": "Malloc disk", 00:05:45.655 "block_size": 512, 00:05:45.655 "num_blocks": 16384, 00:05:45.655 "uuid": "df35f7ce-7957-44a1-b57c-f6c44eafc5d3", 00:05:45.655 "assigned_rate_limits": { 00:05:45.655 "rw_ios_per_sec": 0, 00:05:45.655 "rw_mbytes_per_sec": 0, 00:05:45.655 "r_mbytes_per_sec": 0, 00:05:45.655 "w_mbytes_per_sec": 0 00:05:45.655 }, 00:05:45.655 "claimed": true, 00:05:45.655 "claim_type": "exclusive_write", 00:05:45.655 "zoned": false, 00:05:45.655 "supported_io_types": { 00:05:45.655 "read": true, 00:05:45.655 "write": true, 00:05:45.655 "unmap": true, 00:05:45.655 "flush": true, 00:05:45.655 "reset": true, 00:05:45.655 "nvme_admin": false, 00:05:45.655 "nvme_io": false, 00:05:45.655 "nvme_io_md": false, 00:05:45.655 "write_zeroes": true, 00:05:45.655 "zcopy": true, 00:05:45.655 "get_zone_info": false, 00:05:45.655 "zone_management": false, 00:05:45.655 "zone_append": false, 00:05:45.655 "compare": false, 00:05:45.655 "compare_and_write": false, 00:05:45.655 "abort": true, 00:05:45.655 
"seek_hole": false, 00:05:45.655 "seek_data": false, 00:05:45.655 "copy": true, 00:05:45.655 "nvme_iov_md": false 00:05:45.655 }, 00:05:45.655 "memory_domains": [ 00:05:45.655 { 00:05:45.655 "dma_device_id": "system", 00:05:45.655 "dma_device_type": 1 00:05:45.655 }, 00:05:45.655 { 00:05:45.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.655 "dma_device_type": 2 00:05:45.655 } 00:05:45.655 ], 00:05:45.655 "driver_specific": {} 00:05:45.655 }, 00:05:45.655 { 00:05:45.655 "name": "Passthru0", 00:05:45.655 "aliases": [ 00:05:45.655 "4cad4a8c-9664-58d2-b90f-533f99302c9e" 00:05:45.655 ], 00:05:45.655 "product_name": "passthru", 00:05:45.655 "block_size": 512, 00:05:45.655 "num_blocks": 16384, 00:05:45.655 "uuid": "4cad4a8c-9664-58d2-b90f-533f99302c9e", 00:05:45.655 "assigned_rate_limits": { 00:05:45.655 "rw_ios_per_sec": 0, 00:05:45.655 "rw_mbytes_per_sec": 0, 00:05:45.655 "r_mbytes_per_sec": 0, 00:05:45.655 "w_mbytes_per_sec": 0 00:05:45.655 }, 00:05:45.655 "claimed": false, 00:05:45.655 "zoned": false, 00:05:45.655 "supported_io_types": { 00:05:45.655 "read": true, 00:05:45.655 "write": true, 00:05:45.655 "unmap": true, 00:05:45.655 "flush": true, 00:05:45.655 "reset": true, 00:05:45.655 "nvme_admin": false, 00:05:45.655 "nvme_io": false, 00:05:45.655 "nvme_io_md": false, 00:05:45.655 "write_zeroes": true, 00:05:45.655 "zcopy": true, 00:05:45.655 "get_zone_info": false, 00:05:45.655 "zone_management": false, 00:05:45.655 "zone_append": false, 00:05:45.655 "compare": false, 00:05:45.655 "compare_and_write": false, 00:05:45.655 "abort": true, 00:05:45.655 "seek_hole": false, 00:05:45.655 "seek_data": false, 00:05:45.655 "copy": true, 00:05:45.655 "nvme_iov_md": false 00:05:45.655 }, 00:05:45.655 "memory_domains": [ 00:05:45.655 { 00:05:45.655 "dma_device_id": "system", 00:05:45.655 "dma_device_type": 1 00:05:45.655 }, 00:05:45.655 { 00:05:45.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.655 "dma_device_type": 2 00:05:45.655 } 00:05:45.655 ], 00:05:45.655 "driver_specific": { 00:05:45.655 "passthru": { 00:05:45.655 "name": "Passthru0", 00:05:45.655 "base_bdev_name": "Malloc2" 00:05:45.655 } 00:05:45.655 } 00:05:45.655 } 00:05:45.655 ]' 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.655 11:36:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.655 11:36:58 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.655 11:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:45.655 11:36:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.655 00:05:45.655 real 0m0.220s 00:05:45.655 user 0m0.130s 00:05:45.655 sys 0m0.029s 00:05:45.655 ************************************ 00:05:45.655 END TEST rpc_daemon_integrity 00:05:45.655 ************************************ 00:05:45.655 11:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.655 11:36:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.915 11:36:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:45.915 11:36:59 rpc -- rpc/rpc.sh@84 -- # killprocess 69401 00:05:45.915 11:36:59 rpc -- common/autotest_common.sh@950 -- # '[' -z 69401 ']' 00:05:45.915 11:36:59 rpc -- common/autotest_common.sh@954 -- # kill -0 69401 00:05:45.915 11:36:59 rpc -- common/autotest_common.sh@955 -- # uname 00:05:45.915 11:36:59 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.915 11:36:59 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69401 00:05:45.915 11:36:59 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.915 11:36:59 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.915 killing process with pid 69401 00:05:45.915 11:36:59 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69401' 00:05:45.915 11:36:59 rpc -- common/autotest_common.sh@969 -- # kill 69401 00:05:45.915 11:36:59 rpc -- common/autotest_common.sh@974 -- # wait 69401 00:05:46.175 00:05:46.175 real 0m2.240s 00:05:46.175 user 0m2.717s 00:05:46.175 sys 0m0.541s 00:05:46.175 11:36:59 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.175 11:36:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.175 ************************************ 00:05:46.175 END TEST rpc 00:05:46.175 ************************************ 00:05:46.175 11:36:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:46.175 11:36:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.175 11:36:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.175 11:36:59 -- common/autotest_common.sh@10 -- # set +x 00:05:46.175 ************************************ 00:05:46.175 START TEST skip_rpc 00:05:46.175 ************************************ 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:46.175 * Looking for test storage... 
00:05:46.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.175 11:36:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.175 --rc genhtml_branch_coverage=1 00:05:46.175 --rc genhtml_function_coverage=1 00:05:46.175 --rc genhtml_legend=1 00:05:46.175 --rc geninfo_all_blocks=1 00:05:46.175 --rc geninfo_unexecuted_blocks=1 00:05:46.175 00:05:46.175 ' 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.175 --rc genhtml_branch_coverage=1 00:05:46.175 --rc genhtml_function_coverage=1 00:05:46.175 --rc genhtml_legend=1 00:05:46.175 --rc geninfo_all_blocks=1 00:05:46.175 --rc geninfo_unexecuted_blocks=1 00:05:46.175 00:05:46.175 ' 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:46.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.175 --rc genhtml_branch_coverage=1 00:05:46.175 --rc genhtml_function_coverage=1 00:05:46.175 --rc genhtml_legend=1 00:05:46.175 --rc geninfo_all_blocks=1 00:05:46.175 --rc geninfo_unexecuted_blocks=1 00:05:46.175 00:05:46.175 ' 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.175 --rc genhtml_branch_coverage=1 00:05:46.175 --rc genhtml_function_coverage=1 00:05:46.175 --rc genhtml_legend=1 00:05:46.175 --rc geninfo_all_blocks=1 00:05:46.175 --rc geninfo_unexecuted_blocks=1 00:05:46.175 00:05:46.175 ' 00:05:46.175 11:36:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:46.175 11:36:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:46.175 11:36:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.175 11:36:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.175 ************************************ 00:05:46.175 START TEST skip_rpc 00:05:46.175 ************************************ 00:05:46.175 11:36:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:46.175 11:36:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69597 00:05:46.175 11:36:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.175 11:36:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:46.175 11:36:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:46.435 [2024-11-19 11:36:59.639648] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:46.435 [2024-11-19 11:36:59.639756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69597 ] 00:05:46.435 [2024-11-19 11:36:59.775279] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.435 [2024-11-19 11:36:59.805993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69597 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69597 ']' 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69597 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69597 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.711 killing process with pid 69597 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69597' 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69597 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69597 00:05:51.711 00:05:51.711 real 0m5.269s 00:05:51.711 user 0m4.947s 00:05:51.711 sys 0m0.224s 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.711 ************************************ 00:05:51.711 END TEST skip_rpc 00:05:51.711 ************************************ 00:05:51.711 11:37:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:51.711 11:37:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:51.711 11:37:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.711 11:37:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.711 11:37:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.711 ************************************ 00:05:51.711 START TEST skip_rpc_with_json 00:05:51.711 ************************************ 00:05:51.711 11:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:51.711 11:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:51.711 11:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69679 00:05:51.711 11:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.711 11:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69679 00:05:51.711 11:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69679 ']' 00:05:51.711 11:37:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.711 11:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.711 11:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.711 11:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.711 11:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.712 11:37:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.712 [2024-11-19 11:37:04.961859] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:51.712 [2024-11-19 11:37:04.961974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69679 ] 00:05:51.712 [2024-11-19 11:37:05.091430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.970 [2024-11-19 11:37:05.122492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.536 [2024-11-19 11:37:05.810655] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:52.536 request: 00:05:52.536 { 00:05:52.536 "trtype": "tcp", 00:05:52.536 "method": "nvmf_get_transports", 00:05:52.536 "req_id": 1 00:05:52.536 } 00:05:52.536 Got JSON-RPC error response 00:05:52.536 response: 00:05:52.536 { 00:05:52.536 "code": -19, 00:05:52.536 "message": "No such device" 00:05:52.536 } 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.536 [2024-11-19 11:37:05.822788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.536 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.794 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.794 11:37:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:52.794 { 00:05:52.794 "subsystems": [ 00:05:52.794 { 00:05:52.794 "subsystem": "fsdev", 00:05:52.794 "config": [ 00:05:52.794 { 00:05:52.794 "method": "fsdev_set_opts", 00:05:52.794 "params": { 00:05:52.794 "fsdev_io_pool_size": 65535, 00:05:52.794 "fsdev_io_cache_size": 256 00:05:52.794 } 00:05:52.794 } 00:05:52.794 ] 00:05:52.794 }, 00:05:52.794 { 00:05:52.794 "subsystem": "keyring", 00:05:52.794 "config": [] 00:05:52.794 }, 00:05:52.794 { 00:05:52.794 "subsystem": "iobuf", 00:05:52.794 "config": [ 00:05:52.794 { 00:05:52.794 "method": "iobuf_set_options", 00:05:52.794 "params": { 00:05:52.794 "small_pool_count": 8192, 00:05:52.794 "large_pool_count": 1024, 00:05:52.794 "small_bufsize": 8192, 00:05:52.794 "large_bufsize": 135168 00:05:52.794 } 00:05:52.794 } 00:05:52.794 ] 00:05:52.794 }, 00:05:52.794 { 00:05:52.794 "subsystem": "sock", 00:05:52.794 "config": [ 00:05:52.794 { 00:05:52.794 "method": 
"sock_set_default_impl", 00:05:52.794 "params": { 00:05:52.794 "impl_name": "posix" 00:05:52.794 } 00:05:52.794 }, 00:05:52.794 { 00:05:52.794 "method": "sock_impl_set_options", 00:05:52.794 "params": { 00:05:52.794 "impl_name": "ssl", 00:05:52.794 "recv_buf_size": 4096, 00:05:52.794 "send_buf_size": 4096, 00:05:52.794 "enable_recv_pipe": true, 00:05:52.794 "enable_quickack": false, 00:05:52.794 "enable_placement_id": 0, 00:05:52.794 "enable_zerocopy_send_server": true, 00:05:52.794 "enable_zerocopy_send_client": false, 00:05:52.794 "zerocopy_threshold": 0, 00:05:52.794 "tls_version": 0, 00:05:52.794 "enable_ktls": false 00:05:52.794 } 00:05:52.794 }, 00:05:52.794 { 00:05:52.794 "method": "sock_impl_set_options", 00:05:52.794 "params": { 00:05:52.794 "impl_name": "posix", 00:05:52.794 "recv_buf_size": 2097152, 00:05:52.794 "send_buf_size": 2097152, 00:05:52.794 "enable_recv_pipe": true, 00:05:52.794 "enable_quickack": false, 00:05:52.794 "enable_placement_id": 0, 00:05:52.794 "enable_zerocopy_send_server": true, 00:05:52.794 "enable_zerocopy_send_client": false, 00:05:52.794 "zerocopy_threshold": 0, 00:05:52.794 "tls_version": 0, 00:05:52.794 "enable_ktls": false 00:05:52.794 } 00:05:52.794 } 00:05:52.794 ] 00:05:52.794 }, 00:05:52.794 { 00:05:52.794 "subsystem": "vmd", 00:05:52.794 "config": [] 00:05:52.794 }, 00:05:52.794 { 00:05:52.794 "subsystem": "accel", 00:05:52.794 "config": [ 00:05:52.794 { 00:05:52.794 "method": "accel_set_options", 00:05:52.794 "params": { 00:05:52.794 "small_cache_size": 128, 00:05:52.794 "large_cache_size": 16, 00:05:52.794 "task_count": 2048, 00:05:52.794 "sequence_count": 2048, 00:05:52.794 "buf_count": 2048 00:05:52.794 } 00:05:52.794 } 00:05:52.794 ] 00:05:52.794 }, 00:05:52.794 { 00:05:52.794 "subsystem": "bdev", 00:05:52.794 "config": [ 00:05:52.794 { 00:05:52.794 "method": "bdev_set_options", 00:05:52.794 "params": { 00:05:52.794 "bdev_io_pool_size": 65535, 00:05:52.794 "bdev_io_cache_size": 256, 00:05:52.794 "bdev_auto_examine": true, 00:05:52.794 "iobuf_small_cache_size": 128, 00:05:52.794 "iobuf_large_cache_size": 16 00:05:52.794 } 00:05:52.794 }, 00:05:52.794 { 00:05:52.794 "method": "bdev_raid_set_options", 00:05:52.794 "params": { 00:05:52.794 "process_window_size_kb": 1024, 00:05:52.794 "process_max_bandwidth_mb_sec": 0 00:05:52.794 } 00:05:52.794 }, 00:05:52.794 { 00:05:52.794 "method": "bdev_iscsi_set_options", 00:05:52.794 "params": { 00:05:52.794 "timeout_sec": 30 00:05:52.794 } 00:05:52.794 }, 00:05:52.794 { 00:05:52.794 "method": "bdev_nvme_set_options", 00:05:52.794 "params": { 00:05:52.794 "action_on_timeout": "none", 00:05:52.794 "timeout_us": 0, 00:05:52.794 "timeout_admin_us": 0, 00:05:52.794 "keep_alive_timeout_ms": 10000, 00:05:52.794 "arbitration_burst": 0, 00:05:52.794 "low_priority_weight": 0, 00:05:52.795 "medium_priority_weight": 0, 00:05:52.795 "high_priority_weight": 0, 00:05:52.795 "nvme_adminq_poll_period_us": 10000, 00:05:52.795 "nvme_ioq_poll_period_us": 0, 00:05:52.795 "io_queue_requests": 0, 00:05:52.795 "delay_cmd_submit": true, 00:05:52.795 "transport_retry_count": 4, 00:05:52.795 "bdev_retry_count": 3, 00:05:52.795 "transport_ack_timeout": 0, 00:05:52.795 "ctrlr_loss_timeout_sec": 0, 00:05:52.795 "reconnect_delay_sec": 0, 00:05:52.795 "fast_io_fail_timeout_sec": 0, 00:05:52.795 "disable_auto_failback": false, 00:05:52.795 "generate_uuids": false, 00:05:52.795 "transport_tos": 0, 00:05:52.795 "nvme_error_stat": false, 00:05:52.795 "rdma_srq_size": 0, 00:05:52.795 "io_path_stat": false, 00:05:52.795 
"allow_accel_sequence": false, 00:05:52.795 "rdma_max_cq_size": 0, 00:05:52.795 "rdma_cm_event_timeout_ms": 0, 00:05:52.795 "dhchap_digests": [ 00:05:52.795 "sha256", 00:05:52.795 "sha384", 00:05:52.795 "sha512" 00:05:52.795 ], 00:05:52.795 "dhchap_dhgroups": [ 00:05:52.795 "null", 00:05:52.795 "ffdhe2048", 00:05:52.795 "ffdhe3072", 00:05:52.795 "ffdhe4096", 00:05:52.795 "ffdhe6144", 00:05:52.795 "ffdhe8192" 00:05:52.795 ] 00:05:52.795 } 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "method": "bdev_nvme_set_hotplug", 00:05:52.795 "params": { 00:05:52.795 "period_us": 100000, 00:05:52.795 "enable": false 00:05:52.795 } 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "method": "bdev_wait_for_examine" 00:05:52.795 } 00:05:52.795 ] 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "subsystem": "scsi", 00:05:52.795 "config": null 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "subsystem": "scheduler", 00:05:52.795 "config": [ 00:05:52.795 { 00:05:52.795 "method": "framework_set_scheduler", 00:05:52.795 "params": { 00:05:52.795 "name": "static" 00:05:52.795 } 00:05:52.795 } 00:05:52.795 ] 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "subsystem": "vhost_scsi", 00:05:52.795 "config": [] 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "subsystem": "vhost_blk", 00:05:52.795 "config": [] 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "subsystem": "ublk", 00:05:52.795 "config": [] 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "subsystem": "nbd", 00:05:52.795 "config": [] 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "subsystem": "nvmf", 00:05:52.795 "config": [ 00:05:52.795 { 00:05:52.795 "method": "nvmf_set_config", 00:05:52.795 "params": { 00:05:52.795 "discovery_filter": "match_any", 00:05:52.795 "admin_cmd_passthru": { 00:05:52.795 "identify_ctrlr": false 00:05:52.795 }, 00:05:52.795 "dhchap_digests": [ 00:05:52.795 "sha256", 00:05:52.795 "sha384", 00:05:52.795 "sha512" 00:05:52.795 ], 00:05:52.795 "dhchap_dhgroups": [ 00:05:52.795 "null", 00:05:52.795 "ffdhe2048", 00:05:52.795 "ffdhe3072", 00:05:52.795 "ffdhe4096", 00:05:52.795 "ffdhe6144", 00:05:52.795 "ffdhe8192" 00:05:52.795 ] 00:05:52.795 } 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "method": "nvmf_set_max_subsystems", 00:05:52.795 "params": { 00:05:52.795 "max_subsystems": 1024 00:05:52.795 } 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "method": "nvmf_set_crdt", 00:05:52.795 "params": { 00:05:52.795 "crdt1": 0, 00:05:52.795 "crdt2": 0, 00:05:52.795 "crdt3": 0 00:05:52.795 } 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "method": "nvmf_create_transport", 00:05:52.795 "params": { 00:05:52.795 "trtype": "TCP", 00:05:52.795 "max_queue_depth": 128, 00:05:52.795 "max_io_qpairs_per_ctrlr": 127, 00:05:52.795 "in_capsule_data_size": 4096, 00:05:52.795 "max_io_size": 131072, 00:05:52.795 "io_unit_size": 131072, 00:05:52.795 "max_aq_depth": 128, 00:05:52.795 "num_shared_buffers": 511, 00:05:52.795 "buf_cache_size": 4294967295, 00:05:52.795 "dif_insert_or_strip": false, 00:05:52.795 "zcopy": false, 00:05:52.795 "c2h_success": true, 00:05:52.795 "sock_priority": 0, 00:05:52.795 "abort_timeout_sec": 1, 00:05:52.795 "ack_timeout": 0, 00:05:52.795 "data_wr_pool_size": 0 00:05:52.795 } 00:05:52.795 } 00:05:52.795 ] 00:05:52.795 }, 00:05:52.795 { 00:05:52.795 "subsystem": "iscsi", 00:05:52.795 "config": [ 00:05:52.795 { 00:05:52.795 "method": "iscsi_set_options", 00:05:52.795 "params": { 00:05:52.795 "node_base": "iqn.2016-06.io.spdk", 00:05:52.795 "max_sessions": 128, 00:05:52.795 "max_connections_per_session": 2, 00:05:52.795 "max_queue_depth": 64, 00:05:52.795 "default_time2wait": 2, 
00:05:52.795 "default_time2retain": 20, 00:05:52.795 "first_burst_length": 8192, 00:05:52.795 "immediate_data": true, 00:05:52.795 "allow_duplicated_isid": false, 00:05:52.795 "error_recovery_level": 0, 00:05:52.795 "nop_timeout": 60, 00:05:52.795 "nop_in_interval": 30, 00:05:52.795 "disable_chap": false, 00:05:52.795 "require_chap": false, 00:05:52.795 "mutual_chap": false, 00:05:52.795 "chap_group": 0, 00:05:52.795 "max_large_datain_per_connection": 64, 00:05:52.795 "max_r2t_per_connection": 4, 00:05:52.795 "pdu_pool_size": 36864, 00:05:52.795 "immediate_data_pool_size": 16384, 00:05:52.795 "data_out_pool_size": 2048 00:05:52.795 } 00:05:52.795 } 00:05:52.795 ] 00:05:52.795 } 00:05:52.795 ] 00:05:52.795 } 00:05:52.795 11:37:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:52.795 11:37:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69679 00:05:52.795 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69679 ']' 00:05:52.795 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69679 00:05:52.795 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:52.795 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.795 11:37:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69679 00:05:52.795 11:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.795 11:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.795 killing process with pid 69679 00:05:52.795 11:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69679' 00:05:52.795 11:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69679 00:05:52.795 11:37:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69679 00:05:53.053 11:37:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69708 00:05:53.053 11:37:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:53.053 11:37:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69708 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69708 ']' 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69708 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69708 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.341 killing process with pid 69708 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69708' 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69708 
00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69708 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:58.341 00:05:58.341 real 0m6.645s 00:05:58.341 user 0m6.361s 00:05:58.341 sys 0m0.513s 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.341 ************************************ 00:05:58.341 END TEST skip_rpc_with_json 00:05:58.341 ************************************ 00:05:58.341 11:37:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:58.341 11:37:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.341 11:37:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.341 11:37:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.341 ************************************ 00:05:58.341 START TEST skip_rpc_with_delay 00:05:58.341 ************************************ 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.341 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.342 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.342 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.342 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.342 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:58.342 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:58.342 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:58.342 [2024-11-19 11:37:11.670715] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
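This error is the expected outcome rather than a failure of the run: --wait-for-rpc holds initialization until a framework_start_init RPC arrives, so combined with --no-rpc-server the target could never make progress, and spdk_app_start rejects the pair up front. The assertion skip_rpc_with_delay makes reduces to a sketch like this (same illustrative binary path as above):

    # The flag combination must exit non-zero; success would be a test failure.
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi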
00:05:58.342 [2024-11-19 11:37:11.670831] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:58.342 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:58.342 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:58.342 ************************************ 00:05:58.342 END TEST skip_rpc_with_delay 00:05:58.342 ************************************ 00:05:58.342 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:58.342 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:58.342 00:05:58.342 real 0m0.116s 00:05:58.342 user 0m0.065s 00:05:58.342 sys 0m0.050s 00:05:58.342 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.342 11:37:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:58.601 11:37:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:58.601 11:37:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:58.601 11:37:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:58.601 11:37:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.601 11:37:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.601 11:37:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.601 ************************************ 00:05:58.601 START TEST exit_on_failed_rpc_init 00:05:58.601 ************************************ 00:05:58.601 11:37:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:58.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.601 11:37:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69819 00:05:58.601 11:37:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69819 00:05:58.601 11:37:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69819 ']' 00:05:58.601 11:37:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.601 11:37:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.601 11:37:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.601 11:37:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.601 11:37:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.601 11:37:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:58.601 [2024-11-19 11:37:11.875296] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:58.601 [2024-11-19 11:37:11.875499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69819 ] 00:05:58.859 [2024-11-19 11:37:12.016793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.859 [2024-11-19 11:37:12.047435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:59.424 11:37:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:59.424 [2024-11-19 11:37:12.796733] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:59.424 [2024-11-19 11:37:12.796993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69837 ] 00:05:59.682 [2024-11-19 11:37:12.931869] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.682 [2024-11-19 11:37:12.962794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.682 [2024-11-19 11:37:12.962881] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
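The "socket in use" failure above is provoked deliberately: exit_on_failed_rpc_init starts a second spdk_tgt on core mask 0x2 while pid 69819 still owns /var/tmp/spdk.sock, and the test passes only if that second instance exits non-zero (the es=234 -> 106 -> 1 mapping below). When two targets genuinely need to coexist, each one is given its own RPC listen address instead; a sketch, assuming the standard -r/--rpc-socket application option and rpc.py's -s flag:

    # Give each target a private RPC socket so neither one fails init.
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    # Address a specific instance by pointing the client at its socket.
    ./scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version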
00:05:59.682 [2024-11-19 11:37:12.962896] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:59.682 [2024-11-19 11:37:12.962916] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69819 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69819 ']' 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69819 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69819 00:05:59.682 killing process with pid 69819 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69819' 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69819 00:05:59.682 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69819 00:05:59.940 ************************************ 00:05:59.940 END TEST exit_on_failed_rpc_init 00:05:59.940 ************************************ 00:05:59.940 00:05:59.940 real 0m1.529s 00:05:59.940 user 0m1.719s 00:05:59.940 sys 0m0.379s 00:05:59.940 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.940 11:37:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.940 11:37:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:59.940 00:05:59.940 real 0m13.923s 00:05:59.940 user 0m13.224s 00:05:59.940 sys 0m1.345s 00:05:59.940 11:37:13 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.940 11:37:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.940 ************************************ 00:05:59.940 END TEST skip_rpc 00:05:59.940 ************************************ 00:06:00.200 11:37:13 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:00.200 11:37:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.200 11:37:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.200 11:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.200 
************************************ 00:06:00.200 START TEST rpc_client 00:06:00.200 ************************************ 00:06:00.200 11:37:13 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:00.200 * Looking for test storage... 00:06:00.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:00.200 11:37:13 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.200 11:37:13 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.200 11:37:13 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.200 11:37:13 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.200 11:37:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:00.201 11:37:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:00.201 11:37:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.201 11:37:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:00.201 11:37:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.201 11:37:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.201 11:37:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.201 11:37:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:00.201 11:37:13 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.201 11:37:13 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.201 --rc genhtml_branch_coverage=1 00:06:00.201 --rc genhtml_function_coverage=1 00:06:00.201 --rc genhtml_legend=1 00:06:00.201 --rc geninfo_all_blocks=1 00:06:00.201 --rc geninfo_unexecuted_blocks=1 00:06:00.201 00:06:00.201 ' 00:06:00.201 11:37:13 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.201 --rc genhtml_branch_coverage=1 00:06:00.201 --rc genhtml_function_coverage=1 00:06:00.201 --rc genhtml_legend=1 00:06:00.201 --rc geninfo_all_blocks=1 00:06:00.201 --rc geninfo_unexecuted_blocks=1 00:06:00.201 00:06:00.201 ' 00:06:00.201 11:37:13 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.201 --rc genhtml_branch_coverage=1 00:06:00.201 --rc genhtml_function_coverage=1 00:06:00.201 --rc genhtml_legend=1 00:06:00.201 --rc geninfo_all_blocks=1 00:06:00.201 --rc geninfo_unexecuted_blocks=1 00:06:00.201 00:06:00.201 ' 00:06:00.201 11:37:13 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.201 --rc genhtml_branch_coverage=1 00:06:00.201 --rc genhtml_function_coverage=1 00:06:00.201 --rc genhtml_legend=1 00:06:00.201 --rc geninfo_all_blocks=1 00:06:00.201 --rc geninfo_unexecuted_blocks=1 00:06:00.201 00:06:00.201 ' 00:06:00.201 11:37:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:00.201 OK 00:06:00.201 11:37:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:00.201 00:06:00.201 real 0m0.187s 00:06:00.201 user 0m0.105s 00:06:00.201 sys 0m0.088s 00:06:00.201 11:37:13 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.201 11:37:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:00.201 ************************************ 00:06:00.201 END TEST rpc_client 00:06:00.201 ************************************ 00:06:00.460 11:37:13 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:00.460 11:37:13 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.460 11:37:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.460 11:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.460 ************************************ 00:06:00.460 START TEST json_config 00:06:00.460 ************************************ 00:06:00.460 11:37:13 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:00.460 11:37:13 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.460 11:37:13 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.460 11:37:13 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.460 11:37:13 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.460 11:37:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.460 11:37:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.460 11:37:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.460 11:37:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.460 11:37:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.460 11:37:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.460 11:37:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.460 11:37:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.460 11:37:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.460 11:37:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.460 11:37:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.460 11:37:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:00.460 11:37:13 json_config -- scripts/common.sh@345 -- # : 1 00:06:00.460 11:37:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.460 11:37:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.460 11:37:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:00.460 11:37:13 json_config -- scripts/common.sh@353 -- # local d=1 00:06:00.460 11:37:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.460 11:37:13 json_config -- scripts/common.sh@355 -- # echo 1 00:06:00.460 11:37:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.460 11:37:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:00.460 11:37:13 json_config -- scripts/common.sh@353 -- # local d=2 00:06:00.460 11:37:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.460 11:37:13 json_config -- scripts/common.sh@355 -- # echo 2 00:06:00.460 11:37:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.460 11:37:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.460 11:37:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.460 11:37:13 json_config -- scripts/common.sh@368 -- # return 0 00:06:00.460 11:37:13 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.460 11:37:13 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.460 --rc genhtml_branch_coverage=1 00:06:00.460 --rc genhtml_function_coverage=1 00:06:00.460 --rc genhtml_legend=1 00:06:00.460 --rc geninfo_all_blocks=1 00:06:00.460 --rc geninfo_unexecuted_blocks=1 00:06:00.460 00:06:00.460 ' 00:06:00.460 11:37:13 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.460 --rc genhtml_branch_coverage=1 00:06:00.460 --rc genhtml_function_coverage=1 00:06:00.460 --rc genhtml_legend=1 00:06:00.460 --rc geninfo_all_blocks=1 00:06:00.460 --rc geninfo_unexecuted_blocks=1 00:06:00.460 00:06:00.460 ' 00:06:00.460 11:37:13 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.460 --rc genhtml_branch_coverage=1 00:06:00.460 --rc genhtml_function_coverage=1 00:06:00.460 --rc genhtml_legend=1 00:06:00.460 --rc geninfo_all_blocks=1 00:06:00.460 --rc geninfo_unexecuted_blocks=1 00:06:00.460 00:06:00.460 ' 00:06:00.460 11:37:13 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.460 --rc genhtml_branch_coverage=1 00:06:00.460 --rc genhtml_function_coverage=1 00:06:00.460 --rc genhtml_legend=1 00:06:00.460 --rc geninfo_all_blocks=1 00:06:00.460 --rc geninfo_unexecuted_blocks=1 00:06:00.460 00:06:00.460 ' 00:06:00.460 11:37:13 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:00.460 11:37:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:00.460 11:37:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.460 11:37:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.460 11:37:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.460 11:37:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.460 11:37:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.460 11:37:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.460 11:37:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.460 11:37:13 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.460 11:37:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.460 11:37:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.460 11:37:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:229814c7-2b97-487f-9cba-d8dde402b6db 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=229814c7-2b97-487f-9cba-d8dde402b6db 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.461 11:37:13 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.461 11:37:13 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.461 11:37:13 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.461 11:37:13 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.461 11:37:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.461 11:37:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.461 11:37:13 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.461 11:37:13 json_config -- paths/export.sh@5 -- # export PATH 00:06:00.461 11:37:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@51 -- # : 0 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:00.461 11:37:13 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:00.461 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:00.461 11:37:13 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:00.461 11:37:13 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:00.461 11:37:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:00.461 11:37:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:00.461 11:37:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:00.461 11:37:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:00.461 WARNING: No tests are enabled so not running JSON configuration tests 00:06:00.461 11:37:13 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:00.461 11:37:13 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:00.461 00:06:00.461 real 0m0.139s 00:06:00.461 user 0m0.088s 00:06:00.461 sys 0m0.051s 00:06:00.461 11:37:13 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.461 ************************************ 00:06:00.461 11:37:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:00.461 END TEST json_config 00:06:00.461 ************************************ 00:06:00.461 11:37:13 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:00.461 11:37:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.461 11:37:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.461 11:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:00.461 ************************************ 00:06:00.461 START TEST json_config_extra_key 00:06:00.461 ************************************ 00:06:00.461 11:37:13 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:00.720 11:37:13 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:00.720 11:37:13 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:00.720 11:37:13 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:00.720 11:37:13 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.720 11:37:13 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:00.720 11:37:13 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.720 11:37:13 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:00.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.720 --rc genhtml_branch_coverage=1 00:06:00.720 --rc genhtml_function_coverage=1 00:06:00.720 --rc genhtml_legend=1 00:06:00.720 --rc geninfo_all_blocks=1 00:06:00.720 --rc geninfo_unexecuted_blocks=1 00:06:00.720 00:06:00.720 ' 00:06:00.720 11:37:13 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:00.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.720 --rc genhtml_branch_coverage=1 00:06:00.720 --rc genhtml_function_coverage=1 00:06:00.720 --rc genhtml_legend=1 00:06:00.720 --rc geninfo_all_blocks=1 00:06:00.720 --rc geninfo_unexecuted_blocks=1 00:06:00.720 00:06:00.720 ' 00:06:00.720 11:37:13 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:00.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.720 --rc genhtml_branch_coverage=1 00:06:00.720 --rc genhtml_function_coverage=1 00:06:00.720 --rc genhtml_legend=1 00:06:00.720 --rc geninfo_all_blocks=1 00:06:00.720 --rc geninfo_unexecuted_blocks=1 00:06:00.720 00:06:00.720 ' 00:06:00.720 11:37:13 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:00.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.720 --rc genhtml_branch_coverage=1 00:06:00.720 --rc 
genhtml_function_coverage=1 00:06:00.720 --rc genhtml_legend=1 00:06:00.720 --rc geninfo_all_blocks=1 00:06:00.720 --rc geninfo_unexecuted_blocks=1 00:06:00.720 00:06:00.720 ' 00:06:00.720 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:229814c7-2b97-487f-9cba-d8dde402b6db 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=229814c7-2b97-487f-9cba-d8dde402b6db 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:00.720 11:37:13 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.720 11:37:13 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.720 11:37:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.721 11:37:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.721 11:37:13 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.721 11:37:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:00.721 11:37:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.721 11:37:13 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:00.721 11:37:13 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:00.721 11:37:13 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:00.721 11:37:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:00.721 11:37:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.721 11:37:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.721 11:37:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:00.721 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:00.721 11:37:13 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:00.721 11:37:13 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:00.721 11:37:13 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:00.721 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:00.721 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:00.721 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:00.721 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:00.721 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:00.721 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:00.721 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:00.721 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:00.721 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:00.721 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:00.721 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:00.721 INFO: launching applications... 
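[Editor's note] The json_config_extra_key suite tracks each launched app through parallel associative arrays keyed by role ('target'), as the declare -A lines above show. A minimal sketch of that bookkeeping plus a launcher, where start_app and SPDK_DIR are illustrative assumptions (json_config/common.sh internals beyond the logged declares are not shown in this trace):

  # Per-app bookkeeping mirrored from the trace above; the helper name
  # start_app and the SPDK_DIR variable are assumptions for illustration.
  declare -A app_pid=(['target']='')
  declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
  declare -A app_params=(['target']='-m 0x1 -s 1024')
  declare -A configs_path=(['target']="$SPDK_DIR/test/json_config/extra_key.json")

  start_app() {
      local app=$1
      "$SPDK_DIR/build/bin/spdk_tgt" ${app_params[$app]} \
          -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
      app_pid[$app]=$!
  }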
00:06:00.721 11:37:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:00.721 11:37:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:00.721 11:37:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:00.721 11:37:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:00.721 11:37:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:00.721 11:37:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:00.721 11:37:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.721 11:37:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:00.721 11:37:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70014 00:06:00.721 11:37:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:00.721 Waiting for target to run... 00:06:00.721 11:37:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70014 /var/tmp/spdk_tgt.sock 00:06:00.721 11:37:13 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70014 ']' 00:06:00.721 11:37:13 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:00.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:00.721 11:37:13 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.721 11:37:13 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:00.721 11:37:13 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:00.721 11:37:13 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.721 11:37:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:00.721 [2024-11-19 11:37:14.049661] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:00.721 [2024-11-19 11:37:14.049785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70014 ] 00:06:00.978 [2024-11-19 11:37:14.340623] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.978 [2024-11-19 11:37:14.359298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.568 00:06:01.568 INFO: shutting down applications... 00:06:01.568 11:37:14 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.568 11:37:14 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:01.568 11:37:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:01.568 11:37:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
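[Editor's note] The waitforlisten 70014 /var/tmp/spdk_tgt.sock call above (max_retries=100 in the trace) blocks until the freshly launched target answers on its RPC socket. A minimal sketch of such a loop, assuming an rpc_get_methods probe; the real helper in autotest_common.sh may use a different check:

  waitforlisten_sketch() {
      local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
      for ((i = 0; i < 100; i++)); do              # max_retries=100, as logged
          kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
          ./scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }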
00:06:01.568 11:37:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:01.568 11:37:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:01.568 11:37:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:01.568 11:37:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70014 ]] 00:06:01.568 11:37:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70014 00:06:01.568 11:37:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:01.568 11:37:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.568 11:37:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70014 00:06:01.568 11:37:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:02.137 11:37:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:02.137 11:37:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.137 11:37:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70014 00:06:02.137 11:37:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:02.137 11:37:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:02.137 11:37:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:02.137 11:37:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:02.137 SPDK target shutdown done 00:06:02.137 11:37:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:02.137 Success 00:06:02.137 00:06:02.137 real 0m1.565s 00:06:02.137 user 0m1.276s 00:06:02.137 sys 0m0.341s 00:06:02.137 11:37:15 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.137 11:37:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:02.137 ************************************ 00:06:02.137 END TEST json_config_extra_key 00:06:02.137 ************************************ 00:06:02.137 11:37:15 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.137 11:37:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.137 11:37:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.137 11:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:02.137 ************************************ 00:06:02.137 START TEST alias_rpc 00:06:02.137 ************************************ 00:06:02.137 11:37:15 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:02.137 * Looking for test storage... 
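[Editor's note] The json_config_extra_key teardown just logged follows a SIGINT-then-poll pattern: send SIGINT, then probe with kill -0 up to 30 times at 0.5 s intervals until the PID disappears. In outline (the kill -9 escalation is an assumption; this run exited cleanly before needing one):

  shutdown_app() {
      local pid=$1 i
      kill -SIGINT "$pid"
      for ((i = 0; i < 30; i++)); do               # ~15 s budget, as logged
          kill -0 "$pid" 2>/dev/null || return 0   # process gone: clean shutdown
          sleep 0.5
      done
      kill -9 "$pid"   # fallback assumed, not exercised in this log
  }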
00:06:02.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:02.137 11:37:15 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.137 11:37:15 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.137 11:37:15 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.397 11:37:15 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.397 11:37:15 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:02.397 11:37:15 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.397 11:37:15 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:02.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.397 --rc genhtml_branch_coverage=1 00:06:02.397 --rc genhtml_function_coverage=1 00:06:02.397 --rc genhtml_legend=1 00:06:02.397 --rc geninfo_all_blocks=1 00:06:02.397 --rc geninfo_unexecuted_blocks=1 00:06:02.397 00:06:02.397 ' 00:06:02.397 11:37:15 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:02.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.397 --rc genhtml_branch_coverage=1 00:06:02.397 --rc genhtml_function_coverage=1 00:06:02.397 --rc genhtml_legend=1 00:06:02.397 --rc geninfo_all_blocks=1 00:06:02.397 --rc geninfo_unexecuted_blocks=1 00:06:02.397 00:06:02.397 ' 00:06:02.397 11:37:15 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:02.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.397 --rc genhtml_branch_coverage=1 00:06:02.397 --rc genhtml_function_coverage=1 00:06:02.397 --rc genhtml_legend=1 00:06:02.397 --rc geninfo_all_blocks=1 00:06:02.397 --rc geninfo_unexecuted_blocks=1 00:06:02.397 00:06:02.397 ' 00:06:02.397 11:37:15 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:02.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.397 --rc genhtml_branch_coverage=1 00:06:02.397 --rc genhtml_function_coverage=1 00:06:02.397 --rc genhtml_legend=1 00:06:02.397 --rc geninfo_all_blocks=1 00:06:02.397 --rc geninfo_unexecuted_blocks=1 00:06:02.397 00:06:02.397 ' 00:06:02.397 11:37:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.397 11:37:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70088 00:06:02.397 11:37:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70088 00:06:02.397 11:37:15 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70088 ']' 00:06:02.397 11:37:15 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.397 11:37:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.397 11:37:15 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.397 11:37:15 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.397 11:37:15 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.397 11:37:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.397 [2024-11-19 11:37:15.691846] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
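[Editor's note] With the target up, alias_rpc drives it through scripts/rpc.py load_config (the call appears in the lines that follow); load_config replays a JSON configuration over the RPC socket, reading stdin by default, and the -i flag presumably brings RPC method aliases into play, which is what this suite exercises. A hedged round-trip sketch; the temp path is hypothetical:

  ./scripts/rpc.py save_config > /tmp/spdk_config.json     # dump the live config
  ./scripts/rpc.py load_config -i < /tmp/spdk_config.json  # replay it from stdin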
00:06:02.397 [2024-11-19 11:37:15.692173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70088 ] 00:06:02.656 [2024-11-19 11:37:15.825960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.656 [2024-11-19 11:37:15.878001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.221 11:37:16 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.221 11:37:16 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:03.221 11:37:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:03.480 11:37:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70088 00:06:03.480 11:37:16 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70088 ']' 00:06:03.480 11:37:16 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70088 00:06:03.480 11:37:16 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:03.480 11:37:16 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.480 11:37:16 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70088 00:06:03.480 killing process with pid 70088 00:06:03.480 11:37:16 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.480 11:37:16 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.480 11:37:16 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70088' 00:06:03.480 11:37:16 alias_rpc -- common/autotest_common.sh@969 -- # kill 70088 00:06:03.480 11:37:16 alias_rpc -- common/autotest_common.sh@974 -- # wait 70088 00:06:03.740 ************************************ 00:06:03.740 END TEST alias_rpc 00:06:03.740 ************************************ 00:06:03.740 00:06:03.740 real 0m1.584s 00:06:03.740 user 0m1.654s 00:06:03.740 sys 0m0.437s 00:06:03.740 11:37:17 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.740 11:37:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.740 11:37:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:03.740 11:37:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:03.740 11:37:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.740 11:37:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.740 11:37:17 -- common/autotest_common.sh@10 -- # set +x 00:06:03.740 ************************************ 00:06:03.740 START TEST spdkcli_tcp 00:06:03.740 ************************************ 00:06:03.740 11:37:17 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:03.999 * Looking for test storage... 
00:06:03.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:03.999 11:37:17 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:03.999 11:37:17 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:03.999 11:37:17 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:03.999 11:37:17 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:03.999 11:37:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.999 11:37:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.000 11:37:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.000 --rc genhtml_branch_coverage=1 00:06:04.000 --rc genhtml_function_coverage=1 00:06:04.000 --rc genhtml_legend=1 00:06:04.000 --rc geninfo_all_blocks=1 00:06:04.000 --rc geninfo_unexecuted_blocks=1 00:06:04.000 00:06:04.000 ' 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.000 --rc genhtml_branch_coverage=1 00:06:04.000 --rc genhtml_function_coverage=1 00:06:04.000 --rc genhtml_legend=1 00:06:04.000 --rc geninfo_all_blocks=1 00:06:04.000 --rc geninfo_unexecuted_blocks=1 00:06:04.000 
00:06:04.000 ' 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.000 --rc genhtml_branch_coverage=1 00:06:04.000 --rc genhtml_function_coverage=1 00:06:04.000 --rc genhtml_legend=1 00:06:04.000 --rc geninfo_all_blocks=1 00:06:04.000 --rc geninfo_unexecuted_blocks=1 00:06:04.000 00:06:04.000 ' 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.000 --rc genhtml_branch_coverage=1 00:06:04.000 --rc genhtml_function_coverage=1 00:06:04.000 --rc genhtml_legend=1 00:06:04.000 --rc geninfo_all_blocks=1 00:06:04.000 --rc geninfo_unexecuted_blocks=1 00:06:04.000 00:06:04.000 ' 00:06:04.000 11:37:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:04.000 11:37:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:04.000 11:37:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:04.000 11:37:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:04.000 11:37:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:04.000 11:37:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:04.000 11:37:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.000 11:37:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70167 00:06:04.000 11:37:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70167 00:06:04.000 11:37:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:04.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70167 ']' 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.000 11:37:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.000 [2024-11-19 11:37:17.312388] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
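[Editor's note] The spdkcli_tcp run below fronts the target's UNIX-domain RPC socket with a TCP listener so rpc.py can be exercised over 127.0.0.1:9998. Outline of the bridge, with ports and paths exactly as logged (-r and -t are taken to be rpc.py's connection-retry and per-attempt timeout options; without fork, this socat serves a single connection and then exits):

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP front end for the RPC socket
  socat_pid=$!
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid" 2>/dev/null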
00:06:04.000 [2024-11-19 11:37:17.312521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70167 ] 00:06:04.257 [2024-11-19 11:37:17.450875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.257 [2024-11-19 11:37:17.485648] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.257 [2024-11-19 11:37:17.485699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.825 11:37:18 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.825 11:37:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:04.825 11:37:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70184 00:06:04.825 11:37:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:04.825 11:37:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:05.084 [ 00:06:05.084 "bdev_malloc_delete", 00:06:05.084 "bdev_malloc_create", 00:06:05.084 "bdev_null_resize", 00:06:05.084 "bdev_null_delete", 00:06:05.084 "bdev_null_create", 00:06:05.084 "bdev_nvme_cuse_unregister", 00:06:05.084 "bdev_nvme_cuse_register", 00:06:05.084 "bdev_opal_new_user", 00:06:05.084 "bdev_opal_set_lock_state", 00:06:05.084 "bdev_opal_delete", 00:06:05.084 "bdev_opal_get_info", 00:06:05.084 "bdev_opal_create", 00:06:05.084 "bdev_nvme_opal_revert", 00:06:05.084 "bdev_nvme_opal_init", 00:06:05.084 "bdev_nvme_send_cmd", 00:06:05.084 "bdev_nvme_set_keys", 00:06:05.084 "bdev_nvme_get_path_iostat", 00:06:05.084 "bdev_nvme_get_mdns_discovery_info", 00:06:05.084 "bdev_nvme_stop_mdns_discovery", 00:06:05.084 "bdev_nvme_start_mdns_discovery", 00:06:05.084 "bdev_nvme_set_multipath_policy", 00:06:05.084 "bdev_nvme_set_preferred_path", 00:06:05.084 "bdev_nvme_get_io_paths", 00:06:05.084 "bdev_nvme_remove_error_injection", 00:06:05.084 "bdev_nvme_add_error_injection", 00:06:05.084 "bdev_nvme_get_discovery_info", 00:06:05.084 "bdev_nvme_stop_discovery", 00:06:05.084 "bdev_nvme_start_discovery", 00:06:05.084 "bdev_nvme_get_controller_health_info", 00:06:05.084 "bdev_nvme_disable_controller", 00:06:05.084 "bdev_nvme_enable_controller", 00:06:05.084 "bdev_nvme_reset_controller", 00:06:05.084 "bdev_nvme_get_transport_statistics", 00:06:05.084 "bdev_nvme_apply_firmware", 00:06:05.084 "bdev_nvme_detach_controller", 00:06:05.084 "bdev_nvme_get_controllers", 00:06:05.084 "bdev_nvme_attach_controller", 00:06:05.084 "bdev_nvme_set_hotplug", 00:06:05.084 "bdev_nvme_set_options", 00:06:05.084 "bdev_passthru_delete", 00:06:05.084 "bdev_passthru_create", 00:06:05.084 "bdev_lvol_set_parent_bdev", 00:06:05.084 "bdev_lvol_set_parent", 00:06:05.084 "bdev_lvol_check_shallow_copy", 00:06:05.084 "bdev_lvol_start_shallow_copy", 00:06:05.084 "bdev_lvol_grow_lvstore", 00:06:05.084 "bdev_lvol_get_lvols", 00:06:05.085 "bdev_lvol_get_lvstores", 00:06:05.085 "bdev_lvol_delete", 00:06:05.085 "bdev_lvol_set_read_only", 00:06:05.085 "bdev_lvol_resize", 00:06:05.085 "bdev_lvol_decouple_parent", 00:06:05.085 "bdev_lvol_inflate", 00:06:05.085 "bdev_lvol_rename", 00:06:05.085 "bdev_lvol_clone_bdev", 00:06:05.085 "bdev_lvol_clone", 00:06:05.085 "bdev_lvol_snapshot", 00:06:05.085 "bdev_lvol_create", 00:06:05.085 "bdev_lvol_delete_lvstore", 00:06:05.085 "bdev_lvol_rename_lvstore", 00:06:05.085 
"bdev_lvol_create_lvstore", 00:06:05.085 "bdev_raid_set_options", 00:06:05.085 "bdev_raid_remove_base_bdev", 00:06:05.085 "bdev_raid_add_base_bdev", 00:06:05.085 "bdev_raid_delete", 00:06:05.085 "bdev_raid_create", 00:06:05.085 "bdev_raid_get_bdevs", 00:06:05.085 "bdev_error_inject_error", 00:06:05.085 "bdev_error_delete", 00:06:05.085 "bdev_error_create", 00:06:05.085 "bdev_split_delete", 00:06:05.085 "bdev_split_create", 00:06:05.085 "bdev_delay_delete", 00:06:05.085 "bdev_delay_create", 00:06:05.085 "bdev_delay_update_latency", 00:06:05.085 "bdev_zone_block_delete", 00:06:05.085 "bdev_zone_block_create", 00:06:05.085 "blobfs_create", 00:06:05.085 "blobfs_detect", 00:06:05.085 "blobfs_set_cache_size", 00:06:05.085 "bdev_xnvme_delete", 00:06:05.085 "bdev_xnvme_create", 00:06:05.085 "bdev_aio_delete", 00:06:05.085 "bdev_aio_rescan", 00:06:05.085 "bdev_aio_create", 00:06:05.085 "bdev_ftl_set_property", 00:06:05.085 "bdev_ftl_get_properties", 00:06:05.085 "bdev_ftl_get_stats", 00:06:05.085 "bdev_ftl_unmap", 00:06:05.085 "bdev_ftl_unload", 00:06:05.085 "bdev_ftl_delete", 00:06:05.085 "bdev_ftl_load", 00:06:05.085 "bdev_ftl_create", 00:06:05.085 "bdev_virtio_attach_controller", 00:06:05.085 "bdev_virtio_scsi_get_devices", 00:06:05.085 "bdev_virtio_detach_controller", 00:06:05.085 "bdev_virtio_blk_set_hotplug", 00:06:05.085 "bdev_iscsi_delete", 00:06:05.085 "bdev_iscsi_create", 00:06:05.085 "bdev_iscsi_set_options", 00:06:05.085 "accel_error_inject_error", 00:06:05.085 "ioat_scan_accel_module", 00:06:05.085 "dsa_scan_accel_module", 00:06:05.085 "iaa_scan_accel_module", 00:06:05.085 "keyring_file_remove_key", 00:06:05.085 "keyring_file_add_key", 00:06:05.085 "keyring_linux_set_options", 00:06:05.085 "fsdev_aio_delete", 00:06:05.085 "fsdev_aio_create", 00:06:05.085 "iscsi_get_histogram", 00:06:05.085 "iscsi_enable_histogram", 00:06:05.085 "iscsi_set_options", 00:06:05.085 "iscsi_get_auth_groups", 00:06:05.085 "iscsi_auth_group_remove_secret", 00:06:05.085 "iscsi_auth_group_add_secret", 00:06:05.085 "iscsi_delete_auth_group", 00:06:05.085 "iscsi_create_auth_group", 00:06:05.085 "iscsi_set_discovery_auth", 00:06:05.085 "iscsi_get_options", 00:06:05.085 "iscsi_target_node_request_logout", 00:06:05.085 "iscsi_target_node_set_redirect", 00:06:05.085 "iscsi_target_node_set_auth", 00:06:05.085 "iscsi_target_node_add_lun", 00:06:05.085 "iscsi_get_stats", 00:06:05.085 "iscsi_get_connections", 00:06:05.085 "iscsi_portal_group_set_auth", 00:06:05.085 "iscsi_start_portal_group", 00:06:05.085 "iscsi_delete_portal_group", 00:06:05.085 "iscsi_create_portal_group", 00:06:05.085 "iscsi_get_portal_groups", 00:06:05.085 "iscsi_delete_target_node", 00:06:05.085 "iscsi_target_node_remove_pg_ig_maps", 00:06:05.085 "iscsi_target_node_add_pg_ig_maps", 00:06:05.085 "iscsi_create_target_node", 00:06:05.085 "iscsi_get_target_nodes", 00:06:05.085 "iscsi_delete_initiator_group", 00:06:05.085 "iscsi_initiator_group_remove_initiators", 00:06:05.085 "iscsi_initiator_group_add_initiators", 00:06:05.085 "iscsi_create_initiator_group", 00:06:05.085 "iscsi_get_initiator_groups", 00:06:05.085 "nvmf_set_crdt", 00:06:05.085 "nvmf_set_config", 00:06:05.085 "nvmf_set_max_subsystems", 00:06:05.085 "nvmf_stop_mdns_prr", 00:06:05.085 "nvmf_publish_mdns_prr", 00:06:05.085 "nvmf_subsystem_get_listeners", 00:06:05.085 "nvmf_subsystem_get_qpairs", 00:06:05.085 "nvmf_subsystem_get_controllers", 00:06:05.085 "nvmf_get_stats", 00:06:05.085 "nvmf_get_transports", 00:06:05.085 "nvmf_create_transport", 00:06:05.085 "nvmf_get_targets", 00:06:05.085 
"nvmf_delete_target", 00:06:05.085 "nvmf_create_target", 00:06:05.085 "nvmf_subsystem_allow_any_host", 00:06:05.085 "nvmf_subsystem_set_keys", 00:06:05.085 "nvmf_subsystem_remove_host", 00:06:05.085 "nvmf_subsystem_add_host", 00:06:05.085 "nvmf_ns_remove_host", 00:06:05.085 "nvmf_ns_add_host", 00:06:05.085 "nvmf_subsystem_remove_ns", 00:06:05.085 "nvmf_subsystem_set_ns_ana_group", 00:06:05.085 "nvmf_subsystem_add_ns", 00:06:05.085 "nvmf_subsystem_listener_set_ana_state", 00:06:05.085 "nvmf_discovery_get_referrals", 00:06:05.085 "nvmf_discovery_remove_referral", 00:06:05.085 "nvmf_discovery_add_referral", 00:06:05.085 "nvmf_subsystem_remove_listener", 00:06:05.085 "nvmf_subsystem_add_listener", 00:06:05.085 "nvmf_delete_subsystem", 00:06:05.085 "nvmf_create_subsystem", 00:06:05.085 "nvmf_get_subsystems", 00:06:05.085 "env_dpdk_get_mem_stats", 00:06:05.085 "nbd_get_disks", 00:06:05.085 "nbd_stop_disk", 00:06:05.085 "nbd_start_disk", 00:06:05.085 "ublk_recover_disk", 00:06:05.085 "ublk_get_disks", 00:06:05.085 "ublk_stop_disk", 00:06:05.085 "ublk_start_disk", 00:06:05.085 "ublk_destroy_target", 00:06:05.085 "ublk_create_target", 00:06:05.085 "virtio_blk_create_transport", 00:06:05.085 "virtio_blk_get_transports", 00:06:05.085 "vhost_controller_set_coalescing", 00:06:05.085 "vhost_get_controllers", 00:06:05.085 "vhost_delete_controller", 00:06:05.085 "vhost_create_blk_controller", 00:06:05.085 "vhost_scsi_controller_remove_target", 00:06:05.085 "vhost_scsi_controller_add_target", 00:06:05.085 "vhost_start_scsi_controller", 00:06:05.085 "vhost_create_scsi_controller", 00:06:05.085 "thread_set_cpumask", 00:06:05.085 "scheduler_set_options", 00:06:05.085 "framework_get_governor", 00:06:05.085 "framework_get_scheduler", 00:06:05.085 "framework_set_scheduler", 00:06:05.085 "framework_get_reactors", 00:06:05.085 "thread_get_io_channels", 00:06:05.085 "thread_get_pollers", 00:06:05.085 "thread_get_stats", 00:06:05.085 "framework_monitor_context_switch", 00:06:05.085 "spdk_kill_instance", 00:06:05.085 "log_enable_timestamps", 00:06:05.085 "log_get_flags", 00:06:05.085 "log_clear_flag", 00:06:05.085 "log_set_flag", 00:06:05.085 "log_get_level", 00:06:05.085 "log_set_level", 00:06:05.085 "log_get_print_level", 00:06:05.085 "log_set_print_level", 00:06:05.085 "framework_enable_cpumask_locks", 00:06:05.085 "framework_disable_cpumask_locks", 00:06:05.085 "framework_wait_init", 00:06:05.085 "framework_start_init", 00:06:05.085 "scsi_get_devices", 00:06:05.085 "bdev_get_histogram", 00:06:05.085 "bdev_enable_histogram", 00:06:05.085 "bdev_set_qos_limit", 00:06:05.085 "bdev_set_qd_sampling_period", 00:06:05.085 "bdev_get_bdevs", 00:06:05.085 "bdev_reset_iostat", 00:06:05.085 "bdev_get_iostat", 00:06:05.085 "bdev_examine", 00:06:05.085 "bdev_wait_for_examine", 00:06:05.085 "bdev_set_options", 00:06:05.085 "accel_get_stats", 00:06:05.085 "accel_set_options", 00:06:05.085 "accel_set_driver", 00:06:05.085 "accel_crypto_key_destroy", 00:06:05.085 "accel_crypto_keys_get", 00:06:05.085 "accel_crypto_key_create", 00:06:05.085 "accel_assign_opc", 00:06:05.085 "accel_get_module_info", 00:06:05.085 "accel_get_opc_assignments", 00:06:05.085 "vmd_rescan", 00:06:05.085 "vmd_remove_device", 00:06:05.085 "vmd_enable", 00:06:05.085 "sock_get_default_impl", 00:06:05.085 "sock_set_default_impl", 00:06:05.085 "sock_impl_set_options", 00:06:05.085 "sock_impl_get_options", 00:06:05.085 "iobuf_get_stats", 00:06:05.085 "iobuf_set_options", 00:06:05.085 "keyring_get_keys", 00:06:05.085 "framework_get_pci_devices", 00:06:05.085 
"framework_get_config", 00:06:05.085 "framework_get_subsystems", 00:06:05.085 "fsdev_set_opts", 00:06:05.085 "fsdev_get_opts", 00:06:05.085 "trace_get_info", 00:06:05.086 "trace_get_tpoint_group_mask", 00:06:05.086 "trace_disable_tpoint_group", 00:06:05.086 "trace_enable_tpoint_group", 00:06:05.086 "trace_clear_tpoint_mask", 00:06:05.086 "trace_set_tpoint_mask", 00:06:05.086 "notify_get_notifications", 00:06:05.086 "notify_get_types", 00:06:05.086 "spdk_get_version", 00:06:05.086 "rpc_get_methods" 00:06:05.086 ] 00:06:05.086 11:37:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:05.086 11:37:18 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:05.086 11:37:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.086 11:37:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:05.086 11:37:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70167 00:06:05.086 11:37:18 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70167 ']' 00:06:05.086 11:37:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70167 00:06:05.086 11:37:18 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:05.086 11:37:18 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.086 11:37:18 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70167 00:06:05.086 killing process with pid 70167 00:06:05.086 11:37:18 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.086 11:37:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.086 11:37:18 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70167' 00:06:05.086 11:37:18 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70167 00:06:05.086 11:37:18 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70167 00:06:05.345 ************************************ 00:06:05.345 END TEST spdkcli_tcp 00:06:05.345 ************************************ 00:06:05.345 00:06:05.345 real 0m1.610s 00:06:05.345 user 0m2.889s 00:06:05.345 sys 0m0.394s 00:06:05.345 11:37:18 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.345 11:37:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.612 11:37:18 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.612 11:37:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.612 11:37:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.612 11:37:18 -- common/autotest_common.sh@10 -- # set +x 00:06:05.612 ************************************ 00:06:05.612 START TEST dpdk_mem_utility 00:06:05.612 ************************************ 00:06:05.612 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:05.612 * Looking for test storage... 
00:06:05.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:05.612 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.612 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.612 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.612 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.612 11:37:18 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:05.613 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.613 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.613 --rc genhtml_branch_coverage=1 00:06:05.613 --rc genhtml_function_coverage=1 00:06:05.613 --rc genhtml_legend=1 00:06:05.613 --rc geninfo_all_blocks=1 00:06:05.613 --rc geninfo_unexecuted_blocks=1 00:06:05.613 00:06:05.613 ' 00:06:05.613 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.613 --rc 
genhtml_branch_coverage=1 00:06:05.613 --rc genhtml_function_coverage=1 00:06:05.613 --rc genhtml_legend=1 00:06:05.613 --rc geninfo_all_blocks=1 00:06:05.613 --rc geninfo_unexecuted_blocks=1 00:06:05.613 00:06:05.613 ' 00:06:05.613 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.613 --rc genhtml_branch_coverage=1 00:06:05.613 --rc genhtml_function_coverage=1 00:06:05.613 --rc genhtml_legend=1 00:06:05.613 --rc geninfo_all_blocks=1 00:06:05.613 --rc geninfo_unexecuted_blocks=1 00:06:05.613 00:06:05.613 ' 00:06:05.613 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.613 --rc genhtml_branch_coverage=1 00:06:05.613 --rc genhtml_function_coverage=1 00:06:05.613 --rc genhtml_legend=1 00:06:05.613 --rc geninfo_all_blocks=1 00:06:05.613 --rc geninfo_unexecuted_blocks=1 00:06:05.613 00:06:05.613 ' 00:06:05.613 11:37:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:05.613 11:37:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70267 00:06:05.613 11:37:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70267 00:06:05.613 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70267 ']' 00:06:05.613 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.613 11:37:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.613 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.613 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.613 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.613 11:37:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.881 [2024-11-19 11:37:19.020043] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:05.881 [2024-11-19 11:37:19.020164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70267 ] 00:06:05.881 [2024-11-19 11:37:19.152925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.881 [2024-11-19 11:37:19.187799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.446 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.446 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:06.447 11:37:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:06.447 11:37:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:06.447 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.447 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.447 { 00:06:06.706 "filename": "/tmp/spdk_mem_dump.txt" 00:06:06.706 } 00:06:06.706 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.706 11:37:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:06.706 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:06.706 1 heaps totaling size 860.000000 MiB 00:06:06.706 size: 860.000000 MiB heap id: 0 00:06:06.706 end heaps---------- 00:06:06.706 9 mempools totaling size 642.649841 MiB 00:06:06.706 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:06.706 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:06.706 size: 92.545471 MiB name: bdev_io_70267 00:06:06.706 size: 51.011292 MiB name: evtpool_70267 00:06:06.706 size: 50.003479 MiB name: msgpool_70267 00:06:06.706 size: 36.509338 MiB name: fsdev_io_70267 00:06:06.706 size: 21.763794 MiB name: PDU_Pool 00:06:06.706 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:06.706 size: 0.026123 MiB name: Session_Pool 00:06:06.706 end mempools------- 00:06:06.706 6 memzones totaling size 4.142822 MiB 00:06:06.706 size: 1.000366 MiB name: RG_ring_0_70267 00:06:06.706 size: 1.000366 MiB name: RG_ring_1_70267 00:06:06.706 size: 1.000366 MiB name: RG_ring_4_70267 00:06:06.706 size: 1.000366 MiB name: RG_ring_5_70267 00:06:06.706 size: 0.125366 MiB name: RG_ring_2_70267 00:06:06.706 size: 0.015991 MiB name: RG_ring_3_70267 00:06:06.706 end memzones------- 00:06:06.706 11:37:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:06.706 heap id: 0 total size: 860.000000 MiB number of busy elements: 307 number of free elements: 16 00:06:06.706 list of free elements. 
size: 13.936523 MiB 00:06:06.706 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:06.706 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:06.706 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:06.706 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:06.706 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:06.706 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:06.706 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:06.706 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:06.706 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:06.706 element at address: 0x20001d800000 with size: 0.568237 MiB 00:06:06.706 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:06.706 element at address: 0x200003e00000 with size: 0.487915 MiB 00:06:06.706 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:06.706 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:06.706 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:06.706 element at address: 0x200003a00000 with size: 0.353027 MiB 00:06:06.706 list of standard malloc elements. size: 199.266785 MiB 00:06:06.706 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:06.706 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:06.706 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:06.706 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:06.706 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:06.706 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:06.706 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:06.706 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:06.706 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:06.706 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:06:06.706 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:06.706 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:06.706 element at 
address: 0x200003e7d600 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:06.706 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:06.707 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000d87d640 
with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893340 with size: 0.000183 MiB 
00:06:06.707 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:06.707 element at 
address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:06.707 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6ea00 
with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:06.708 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:06.708 list of memzone associated elements. 
size: 646.796692 MiB 00:06:06.708 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:06.708 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:06.708 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:06.708 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:06.708 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:06.708 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70267_0 00:06:06.708 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:06.708 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70267_0 00:06:06.708 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:06.708 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70267_0 00:06:06.708 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:06.708 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70267_0 00:06:06.708 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:06.708 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:06.708 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:06.708 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:06.708 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:06.708 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70267 00:06:06.708 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:06.708 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70267 00:06:06.708 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:06.708 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70267 00:06:06.708 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:06.708 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:06.708 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:06.708 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:06.708 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:06.708 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:06.708 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:06.708 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:06.708 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:06.708 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70267 00:06:06.708 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:06.708 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70267 00:06:06.708 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:06.708 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70267 00:06:06.708 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:06.708 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70267 00:06:06.708 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:06.708 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70267 00:06:06.708 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:06.708 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70267 00:06:06.708 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:06.708 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:06.708 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:06.708 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:06.708 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:06.708 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:06.708 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:06:06.708 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70267 00:06:06.708 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:06.708 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:06.708 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:06.708 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:06.708 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:06:06.708 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70267 00:06:06.708 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:06.708 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:06.708 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:06.708 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70267 00:06:06.708 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:06.708 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70267 00:06:06.708 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:06:06.708 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70267 00:06:06.708 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:06.708 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:06.708 11:37:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:06.708 11:37:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70267 00:06:06.708 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70267 ']' 00:06:06.708 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70267 00:06:06.708 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:06.708 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.708 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70267 00:06:06.708 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.708 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.708 killing process with pid 70267 00:06:06.708 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70267' 00:06:06.708 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70267 00:06:06.708 11:37:19 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70267 00:06:06.967 00:06:06.967 real 0m1.485s 00:06:06.967 user 0m1.536s 00:06:06.967 sys 0m0.383s 00:06:06.967 11:37:20 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.967 11:37:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.967 ************************************ 00:06:06.967 END TEST dpdk_mem_utility 00:06:06.967 ************************************ 00:06:06.967 11:37:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:06.967 11:37:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.967 11:37:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.967 11:37:20 -- common/autotest_common.sh@10 -- # set +x 
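For reference, the dpdk_mem_utility flow traced above reduces to four commands. A minimal sketch, assuming a built SPDK tree run from the repository root with spdk_tgt listening on the default /var/tmp/spdk.sock; only flags that actually appear in the trace are shown:

./build/bin/spdk_tgt &                    # start the target that owns the DPDK heap
./scripts/rpc.py env_dpdk_get_mem_stats   # target writes its dump to /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                # summarize heaps, mempools and memzones from the dump
./scripts/dpdk_mem_info.py -m 0           # element-level listing for heap id 0, as printed above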
00:06:06.967 ************************************ 00:06:06.967 START TEST event 00:06:06.967 ************************************ 00:06:06.967 11:37:20 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:07.225 * Looking for test storage... 00:06:07.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:07.225 11:37:20 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:07.225 11:37:20 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:07.225 11:37:20 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:07.225 11:37:20 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:07.225 11:37:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.225 11:37:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.225 11:37:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.225 11:37:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.225 11:37:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.225 11:37:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.225 11:37:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.225 11:37:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.225 11:37:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.225 11:37:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.225 11:37:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.225 11:37:20 event -- scripts/common.sh@344 -- # case "$op" in 00:06:07.225 11:37:20 event -- scripts/common.sh@345 -- # : 1 00:06:07.225 11:37:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.225 11:37:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.225 11:37:20 event -- scripts/common.sh@365 -- # decimal 1 00:06:07.225 11:37:20 event -- scripts/common.sh@353 -- # local d=1 00:06:07.225 11:37:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.225 11:37:20 event -- scripts/common.sh@355 -- # echo 1 00:06:07.225 11:37:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.225 11:37:20 event -- scripts/common.sh@366 -- # decimal 2 00:06:07.225 11:37:20 event -- scripts/common.sh@353 -- # local d=2 00:06:07.225 11:37:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.225 11:37:20 event -- scripts/common.sh@355 -- # echo 2 00:06:07.225 11:37:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.225 11:37:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.225 11:37:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.225 11:37:20 event -- scripts/common.sh@368 -- # return 0 00:06:07.225 11:37:20 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.225 11:37:20 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:07.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.225 --rc genhtml_branch_coverage=1 00:06:07.225 --rc genhtml_function_coverage=1 00:06:07.225 --rc genhtml_legend=1 00:06:07.225 --rc geninfo_all_blocks=1 00:06:07.225 --rc geninfo_unexecuted_blocks=1 00:06:07.225 00:06:07.225 ' 00:06:07.225 11:37:20 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:07.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.225 --rc genhtml_branch_coverage=1 00:06:07.225 --rc genhtml_function_coverage=1 00:06:07.225 --rc genhtml_legend=1 00:06:07.225 --rc 
geninfo_all_blocks=1 00:06:07.225 --rc geninfo_unexecuted_blocks=1 00:06:07.225 00:06:07.225 ' 00:06:07.225 11:37:20 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:07.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.225 --rc genhtml_branch_coverage=1 00:06:07.225 --rc genhtml_function_coverage=1 00:06:07.225 --rc genhtml_legend=1 00:06:07.225 --rc geninfo_all_blocks=1 00:06:07.225 --rc geninfo_unexecuted_blocks=1 00:06:07.225 00:06:07.225 ' 00:06:07.225 11:37:20 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:07.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.225 --rc genhtml_branch_coverage=1 00:06:07.225 --rc genhtml_function_coverage=1 00:06:07.225 --rc genhtml_legend=1 00:06:07.225 --rc geninfo_all_blocks=1 00:06:07.225 --rc geninfo_unexecuted_blocks=1 00:06:07.225 00:06:07.225 ' 00:06:07.225 11:37:20 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:07.225 11:37:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:07.225 11:37:20 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:07.225 11:37:20 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:07.225 11:37:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.225 11:37:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.225 ************************************ 00:06:07.225 START TEST event_perf 00:06:07.225 ************************************ 00:06:07.225 11:37:20 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:07.225 Running I/O for 1 seconds...[2024-11-19 11:37:20.523715] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:07.225 [2024-11-19 11:37:20.523841] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70348 ] 00:06:07.483 [2024-11-19 11:37:20.658984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.483 [2024-11-19 11:37:20.695923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.483 Running I/O for 1 seconds...[2024-11-19 11:37:20.696491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.483 [2024-11-19 11:37:20.696617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.483 [2024-11-19 11:37:20.696730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.417 00:06:08.417 lcore 0: 184015 00:06:08.417 lcore 1: 184014 00:06:08.417 lcore 2: 184014 00:06:08.417 lcore 3: 184015 00:06:08.417 done. 
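The event_perf binary above starts one SPDK reactor per core in the given mask and, after the timed run, prints how many events each lcore processed ("lcore N: <count>") followed by "done.". A minimal reproduction of the traced invocation, assuming the same build tree; -m is the reactor core mask and -t the measurement time in seconds:

./test/event/event_perf/event_perf -m 0xF -t 1   # four reactors, one-second run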
00:06:08.417 00:06:08.417 real 0m1.263s 00:06:08.417 user 0m4.070s 00:06:08.417 sys 0m0.073s 00:06:08.417 11:37:21 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.417 ************************************ 00:06:08.417 END TEST event_perf 00:06:08.417 ************************************ 00:06:08.417 11:37:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.417 11:37:21 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:08.417 11:37:21 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:08.417 11:37:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.417 11:37:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.417 ************************************ 00:06:08.417 START TEST event_reactor 00:06:08.417 ************************************ 00:06:08.417 11:37:21 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:08.675 [2024-11-19 11:37:21.847861] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:08.676 [2024-11-19 11:37:21.847974] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70382 ] 00:06:08.676 [2024-11-19 11:37:21.983381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.676 [2024-11-19 11:37:22.017800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.054 test_start 00:06:10.054 oneshot 00:06:10.054 tick 100 00:06:10.054 tick 100 00:06:10.054 tick 250 00:06:10.054 tick 100 00:06:10.054 tick 100 00:06:10.054 tick 250 00:06:10.054 tick 100 00:06:10.054 tick 500 00:06:10.054 tick 100 00:06:10.054 tick 100 00:06:10.054 tick 250 00:06:10.054 tick 100 00:06:10.054 tick 100 00:06:10.054 test_end 00:06:10.054 00:06:10.054 real 0m1.257s 00:06:10.054 user 0m1.090s 00:06:10.054 sys 0m0.059s 00:06:10.054 11:37:23 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.054 ************************************ 00:06:10.054 END TEST event_reactor 00:06:10.054 ************************************ 00:06:10.054 11:37:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:10.054 11:37:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:10.054 11:37:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:10.054 11:37:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.054 11:37:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.054 ************************************ 00:06:10.054 START TEST event_reactor_perf 00:06:10.054 ************************************ 00:06:10.054 11:37:23 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:10.054 [2024-11-19 11:37:23.166856] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:10.054 [2024-11-19 11:37:23.166982] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70418 ] 00:06:10.054 [2024-11-19 11:37:23.295292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.054 [2024-11-19 11:37:23.348972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.429 test_start 00:06:11.429 test_end 00:06:11.429 Performance: 308516 events per second 00:06:11.429 00:06:11.429 real 0m1.265s 00:06:11.429 user 0m1.091s 00:06:11.429 sys 0m0.066s 00:06:11.429 ************************************ 00:06:11.429 END TEST event_reactor_perf 00:06:11.429 ************************************ 00:06:11.429 11:37:24 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.429 11:37:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.429 11:37:24 event -- event/event.sh@49 -- # uname -s 00:06:11.429 11:37:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:11.429 11:37:24 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:11.429 11:37:24 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.429 11:37:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.429 11:37:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.429 ************************************ 00:06:11.429 START TEST event_scheduler 00:06:11.429 ************************************ 00:06:11.429 11:37:24 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:11.429 * Looking for test storage... 
00:06:11.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:11.429 11:37:24 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.429 11:37:24 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:11.429 11:37:24 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.429 11:37:24 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.429 11:37:24 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:11.430 11:37:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:11.430 11:37:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.430 11:37:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:11.430 11:37:24 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.430 11:37:24 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.430 11:37:24 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.430 11:37:24 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:11.430 11:37:24 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.430 11:37:24 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.430 --rc genhtml_branch_coverage=1 00:06:11.430 --rc genhtml_function_coverage=1 00:06:11.430 --rc genhtml_legend=1 00:06:11.430 --rc geninfo_all_blocks=1 00:06:11.430 --rc geninfo_unexecuted_blocks=1 00:06:11.430 00:06:11.430 ' 00:06:11.430 11:37:24 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.430 --rc genhtml_branch_coverage=1 00:06:11.430 --rc genhtml_function_coverage=1 00:06:11.430 --rc genhtml_legend=1 00:06:11.430 --rc geninfo_all_blocks=1 00:06:11.430 --rc geninfo_unexecuted_blocks=1 00:06:11.430 00:06:11.430 ' 00:06:11.430 11:37:24 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.430 --rc genhtml_branch_coverage=1 00:06:11.430 --rc genhtml_function_coverage=1 00:06:11.430 --rc genhtml_legend=1 00:06:11.430 --rc geninfo_all_blocks=1 00:06:11.430 --rc geninfo_unexecuted_blocks=1 00:06:11.430 00:06:11.430 ' 00:06:11.430 11:37:24 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.430 --rc genhtml_branch_coverage=1 00:06:11.430 --rc genhtml_function_coverage=1 00:06:11.430 --rc genhtml_legend=1 00:06:11.430 --rc geninfo_all_blocks=1 00:06:11.430 --rc geninfo_unexecuted_blocks=1 00:06:11.430 00:06:11.430 ' 00:06:11.430 11:37:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:11.430 11:37:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70489 00:06:11.430 11:37:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.430 11:37:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70489 00:06:11.430 11:37:24 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70489 ']' 00:06:11.430 11:37:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:11.430 11:37:24 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.430 11:37:24 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.430 11:37:24 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.430 11:37:24 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.430 11:37:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.430 [2024-11-19 11:37:24.679368] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:11.430 [2024-11-19 11:37:24.679499] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70489 ] 00:06:11.430 [2024-11-19 11:37:24.814421] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.688 [2024-11-19 11:37:24.850454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.688 [2024-11-19 11:37:24.850666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.688 [2024-11-19 11:37:24.850915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.688 [2024-11-19 11:37:24.851005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.254 11:37:25 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.254 11:37:25 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:12.254 11:37:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:12.254 11:37:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.254 11:37:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.254 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:12.254 POWER: Cannot set governor of lcore 0 to userspace 00:06:12.254 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:12.254 POWER: Cannot set governor of lcore 0 to performance 00:06:12.254 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:12.255 POWER: Cannot set governor of lcore 0 to userspace 00:06:12.255 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:12.255 POWER: Unable to set Power Management Environment for lcore 0 00:06:12.255 [2024-11-19 11:37:25.532648] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:12.255 [2024-11-19 11:37:25.532670] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:12.255 [2024-11-19 11:37:25.532690] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:12.255 [2024-11-19 11:37:25.532705] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:12.255 [2024-11-19 11:37:25.532713] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:12.255 [2024-11-19 11:37:25.532722] 
scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:12.255 11:37:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.255 11:37:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:12.255 11:37:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.255 11:37:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.255 [2024-11-19 11:37:25.590215] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:12.255 11:37:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.255 11:37:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:12.255 11:37:25 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.255 11:37:25 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.255 11:37:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.255 ************************************ 00:06:12.255 START TEST scheduler_create_thread 00:06:12.255 ************************************ 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.255 2 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.255 3 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.255 4 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.255 5 00:06:12.255 11:37:25 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.255 6 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.255 7 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.255 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.513 8 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.513 9 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.513 10 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:12.513 11:37:25 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.513 11:37:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.080 11:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.080 11:37:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:13.080 11:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.080 11:37:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.455 11:37:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.455 11:37:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:14.455 11:37:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:14.455 11:37:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.455 11:37:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.407 11:37:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.407 ************************************ 00:06:15.407 END TEST scheduler_create_thread 00:06:15.407 ************************************ 00:06:15.407 00:06:15.407 real 0m3.091s 00:06:15.407 user 0m0.012s 00:06:15.407 sys 0m0.005s 00:06:15.407 11:37:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.407 11:37:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.407 11:37:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:15.407 11:37:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70489 00:06:15.407 11:37:28 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70489 ']' 00:06:15.407 11:37:28 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70489 00:06:15.407 11:37:28 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:15.407 11:37:28 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.407 11:37:28 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70489 00:06:15.407 killing process with pid 70489 00:06:15.407 11:37:28 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:15.407 11:37:28 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:15.407 11:37:28 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70489' 00:06:15.407 11:37:28 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70489 00:06:15.407 11:37:28 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70489 00:06:15.986 [2024-11-19 11:37:29.077806] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
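For reference, the RPC choreography the scheduler test just traced boils down to a handful of rpc.py calls. A minimal sketch, assuming the repo layout shown in the trace (/home/vagrant/spdk_repo/spdk), the default /var/tmp/spdk.sock socket, and that the test's scheduler_plugin module is importable by rpc.py; variable names here are illustrative, not part of the harness:

    cd /home/vagrant/spdk_repo/spdk
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    rpc=./scripts/rpc.py                   # talks to /var/tmp/spdk.sock by default

    $rpc framework_set_scheduler dynamic   # pick the scheduler before init completes
    $rpc framework_start_init              # finish --wait-for-rpc startup

    # one thread pinned to core 0 at 100% active load, as scheduler.sh@12 creates;
    # the RPC prints the new thread id on stdout (thread_id=11 in the trace)
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)

    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50   # re-balance to 50%
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"          # then tear it down

    kill "$scheduler_pid"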
00:06:15.986 ************************************ 00:06:15.986 END TEST event_scheduler 00:06:15.986 ************************************ 00:06:15.986 00:06:15.986 real 0m4.805s 00:06:15.986 user 0m9.103s 00:06:15.986 sys 0m0.326s 00:06:15.986 11:37:29 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.986 11:37:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.986 11:37:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:15.986 11:37:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:15.986 11:37:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.986 11:37:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.986 11:37:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.986 ************************************ 00:06:15.986 START TEST app_repeat 00:06:15.986 ************************************ 00:06:15.986 11:37:29 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:15.986 Process app_repeat pid: 70589 00:06:15.986 spdk_app_start Round 0 00:06:15.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70589 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70589' 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70589 /var/tmp/spdk-nbd.sock 00:06:15.986 11:37:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70589 ']' 00:06:15.986 11:37:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.986 11:37:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.986 11:37:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.986 11:37:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:15.986 11:37:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.986 11:37:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.986 [2024-11-19 11:37:29.367621] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:15.986 [2024-11-19 11:37:29.367738] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70589 ] 00:06:16.244 [2024-11-19 11:37:29.500897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.244 [2024-11-19 11:37:29.535629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.244 [2024-11-19 11:37:29.535695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.179 11:37:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.179 11:37:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:17.179 11:37:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.179 Malloc0 00:06:17.179 11:37:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.437 Malloc1 00:06:17.438 11:37:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.438 11:37:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.696 /dev/nbd0 00:06:17.696 11:37:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.696 11:37:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:17.696 11:37:30 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.696 1+0 records in 00:06:17.696 1+0 records out 00:06:17.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496315 s, 8.3 MB/s 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:17.696 11:37:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:17.696 11:37:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.696 11:37:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.696 11:37:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.954 /dev/nbd1 00:06:17.954 11:37:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.954 11:37:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.954 11:37:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:17.954 11:37:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:17.955 11:37:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:17.955 11:37:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:17.955 11:37:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:17.955 11:37:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:17.955 11:37:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:17.955 11:37:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:17.955 11:37:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.955 1+0 records in 00:06:17.955 1+0 records out 00:06:17.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0023152 s, 1.8 MB/s 00:06:17.955 11:37:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.955 11:37:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:17.955 11:37:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.955 11:37:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:17.955 11:37:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:17.955 11:37:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.955 11:37:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.955 11:37:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.955 11:37:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.955 
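The nbd0/nbd1 readiness checks above come from the waitfornbd helper: poll /proc/partitions up to 20 times, then prove the device readable with a single direct-I/O block. A condensed re-creation under two stated assumptions: the retry delay (the xtrace output does not show the loop's sleep) and the scratch-file path, which is made generic here instead of the repo-internal nbdtest path:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                           # assumed delay between probes
        done
        # read one 4096-byte block with O_DIRECT, exactly as the trace does
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                        # a non-empty read means the device is live
    }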
11:37:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd0", 00:06:18.213 "bdev_name": "Malloc0" 00:06:18.213 }, 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd1", 00:06:18.213 "bdev_name": "Malloc1" 00:06:18.213 } 00:06:18.213 ]' 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd0", 00:06:18.213 "bdev_name": "Malloc0" 00:06:18.213 }, 00:06:18.213 { 00:06:18.213 "nbd_device": "/dev/nbd1", 00:06:18.213 "bdev_name": "Malloc1" 00:06:18.213 } 00:06:18.213 ]' 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.213 /dev/nbd1' 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.213 /dev/nbd1' 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.213 256+0 records in 00:06:18.213 256+0 records out 00:06:18.213 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00564546 s, 186 MB/s 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.213 11:37:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.213 256+0 records in 00:06:18.213 256+0 records out 00:06:18.213 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141798 s, 73.9 MB/s 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.214 256+0 records in 00:06:18.214 256+0 records out 00:06:18.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198144 s, 52.9 MB/s 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.214 11:37:31 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.214 11:37:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.471 11:37:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.471 11:37:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.471 11:37:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.471 11:37:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.471 11:37:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.471 11:37:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.471 11:37:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.471 11:37:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.471 11:37:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.471 11:37:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.730 11:37:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.730 11:37:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.730 11:37:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.730 11:37:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.730 11:37:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.730 11:37:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.730 11:37:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.730 11:37:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.730 11:37:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.730 11:37:31 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.730 11:37:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.730 11:37:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.730 11:37:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.730 11:37:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.989 11:37:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.989 11:37:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.989 11:37:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.989 11:37:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.989 11:37:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.989 11:37:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.989 11:37:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.989 11:37:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.989 11:37:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.989 11:37:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.247 11:37:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.247 [2024-11-19 11:37:32.504257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.247 [2024-11-19 11:37:32.535527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.247 [2024-11-19 11:37:32.535723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.247 [2024-11-19 11:37:32.567428] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.247 [2024-11-19 11:37:32.567481] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.532 spdk_app_start Round 1 00:06:22.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.533 11:37:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.533 11:37:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:22.533 11:37:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70589 /var/tmp/spdk-nbd.sock 00:06:22.533 11:37:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70589 ']' 00:06:22.533 11:37:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.533 11:37:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.533 11:37:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
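Round 1 now repeats the same device setup Round 0 traced above: two malloc bdevs, each exported over NBD, all driven through rpc.py against /var/tmp/spdk-nbd.sock. A sketch of that sequence, assuming it runs from the repo root and that bdev_malloc_create's two positional arguments are total size in MiB and block size (64 MiB, 4096-byte blocks here):

    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $rpc bdev_malloc_create 64 4096          # prints the new bdev name: Malloc0
    $rpc bdev_malloc_create 64 4096          # Malloc1

    $rpc nbd_start_disk Malloc0 /dev/nbd0    # expose each bdev as a kernel block device
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # sanity check before I/O: both devices should be listed, as nbd_common.sh@63-66 verifies
    $rpc nbd_get_disks | jq -r '.[] | .nbd_device'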
00:06:22.533 11:37:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.533 11:37:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.533 11:37:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.533 11:37:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:22.533 11:37:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.533 Malloc0 00:06:22.533 11:37:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.790 Malloc1 00:06:22.790 11:37:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.790 11:37:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.048 /dev/nbd0 00:06:23.049 11:37:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.049 11:37:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.049 1+0 records in 00:06:23.049 1+0 records out 
00:06:23.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282332 s, 14.5 MB/s 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:23.049 11:37:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:23.049 11:37:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.049 11:37:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.049 11:37:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.049 /dev/nbd1 00:06:23.049 11:37:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.308 1+0 records in 00:06:23.308 1+0 records out 00:06:23.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276456 s, 14.8 MB/s 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:23.308 11:37:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.308 { 00:06:23.308 "nbd_device": "/dev/nbd0", 00:06:23.308 "bdev_name": "Malloc0" 00:06:23.308 }, 00:06:23.308 { 00:06:23.308 "nbd_device": "/dev/nbd1", 00:06:23.308 "bdev_name": "Malloc1" 00:06:23.308 } 
00:06:23.308 ]' 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.308 { 00:06:23.308 "nbd_device": "/dev/nbd0", 00:06:23.308 "bdev_name": "Malloc0" 00:06:23.308 }, 00:06:23.308 { 00:06:23.308 "nbd_device": "/dev/nbd1", 00:06:23.308 "bdev_name": "Malloc1" 00:06:23.308 } 00:06:23.308 ]' 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.308 /dev/nbd1' 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.308 /dev/nbd1' 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.308 11:37:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.567 256+0 records in 00:06:23.567 256+0 records out 00:06:23.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00790113 s, 133 MB/s 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.567 256+0 records in 00:06:23.567 256+0 records out 00:06:23.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159264 s, 65.8 MB/s 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.567 256+0 records in 00:06:23.567 256+0 records out 00:06:23.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165822 s, 63.2 MB/s 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.567 11:37:36 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.567 11:37:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.827 11:37:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.827 11:37:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.827 11:37:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.827 11:37:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.827 11:37:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.827 11:37:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.827 11:37:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.827 11:37:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.827 11:37:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.827 11:37:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.827 11:37:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.827 11:37:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.827 11:37:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.827 11:37:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.827 11:37:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.827 11:37:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.827 11:37:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.827 11:37:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.827 11:37:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.827 11:37:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.827 11:37:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.086 11:37:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.086 11:37:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.086 11:37:37 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:24.086 11:37:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.086 11:37:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.086 11:37:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.086 11:37:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.086 11:37:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.086 11:37:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.086 11:37:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.086 11:37:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.086 11:37:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.086 11:37:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.344 11:37:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.344 [2024-11-19 11:37:37.734037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.604 [2024-11-19 11:37:37.762220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.604 [2024-11-19 11:37:37.762314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.604 [2024-11-19 11:37:37.792500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.604 [2024-11-19 11:37:37.792658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.913 spdk_app_start Round 2 00:06:27.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.913 11:37:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.913 11:37:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:27.913 11:37:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70589 /var/tmp/spdk-nbd.sock 00:06:27.913 11:37:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70589 ']' 00:06:27.913 11:37:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.913 11:37:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.913 11:37:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
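The write/verify pass each round runs (nbd_common.sh@100/101 above) is plain dd plus cmp. A condensed sketch with an illustrative scratch path in place of the repo-internal nbdrandtest file; the block size, counts, and flags are taken verbatim from the trace:

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data

    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # write it through each NBD
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                            # read back: any diff fails the test
    done
    rm "$tmp"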
00:06:27.913 11:37:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.913 11:37:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.913 11:37:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.913 11:37:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:27.913 11:37:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.913 Malloc0 00:06:27.913 11:37:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.913 Malloc1 00:06:27.913 11:37:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.913 11:37:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.176 /dev/nbd0 00:06:28.176 11:37:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.176 11:37:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.176 1+0 records in 00:06:28.176 1+0 records out 
00:06:28.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186691 s, 21.9 MB/s 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.176 11:37:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:28.176 11:37:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.176 11:37:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.176 11:37:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.449 /dev/nbd1 00:06:28.449 11:37:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.449 11:37:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.449 1+0 records in 00:06:28.449 1+0 records out 00:06:28.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304541 s, 13.4 MB/s 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.449 11:37:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:28.449 11:37:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.449 11:37:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.449 11:37:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.449 11:37:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.449 11:37:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.708 { 00:06:28.708 "nbd_device": "/dev/nbd0", 00:06:28.708 "bdev_name": "Malloc0" 00:06:28.708 }, 00:06:28.708 { 00:06:28.708 "nbd_device": "/dev/nbd1", 00:06:28.708 "bdev_name": "Malloc1" 00:06:28.708 } 
00:06:28.708 ]' 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.708 { 00:06:28.708 "nbd_device": "/dev/nbd0", 00:06:28.708 "bdev_name": "Malloc0" 00:06:28.708 }, 00:06:28.708 { 00:06:28.708 "nbd_device": "/dev/nbd1", 00:06:28.708 "bdev_name": "Malloc1" 00:06:28.708 } 00:06:28.708 ]' 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.708 /dev/nbd1' 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.708 /dev/nbd1' 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.708 256+0 records in 00:06:28.708 256+0 records out 00:06:28.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00851564 s, 123 MB/s 00:06:28.708 11:37:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.709 11:37:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.709 256+0 records in 00:06:28.709 256+0 records out 00:06:28.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167076 s, 62.8 MB/s 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.709 256+0 records in 00:06:28.709 256+0 records out 00:06:28.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206005 s, 50.9 MB/s 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.709 11:37:42 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.709 11:37:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.967 11:37:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.967 11:37:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.967 11:37:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.967 11:37:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.967 11:37:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.967 11:37:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.967 11:37:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.967 11:37:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.967 11:37:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.967 11:37:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.225 11:37:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.225 11:37:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.225 11:37:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.225 11:37:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.225 11:37:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.225 11:37:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.225 11:37:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.225 11:37:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.225 11:37:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.225 11:37:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.225 11:37:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.484 11:37:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.484 11:37:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.484 11:37:42 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:29.484 11:37:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.484 11:37:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.484 11:37:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.484 11:37:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.484 11:37:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.484 11:37:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.484 11:37:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.484 11:37:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.484 11:37:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.484 11:37:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.742 11:37:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.742 [2024-11-19 11:37:43.021225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.742 [2024-11-19 11:37:43.049928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.742 [2024-11-19 11:37:43.050031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.742 [2024-11-19 11:37:43.078254] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.742 [2024-11-19 11:37:43.078299] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.022 11:37:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70589 /var/tmp/spdk-nbd.sock 00:06:33.022 11:37:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70589 ']' 00:06:33.022 11:37:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.022 11:37:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.022 11:37:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
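The trace above is SPDK's NBD data-verify loop in full: Malloc0 and Malloc1 are exported as /dev/nbd0 and /dev/nbd1 via nbd_start_disk, nbd_get_disks is piped through jq to count the exports, 256 blocks of /dev/urandom are written through each device with dd oflag=direct and byte-compared back with cmp, and nbd_stop_disk tears everything down. A self-contained sketch of the same round trip against one device (it assumes a running spdk_tgt on /var/tmp/spdk-nbd.sock, root access to /dev/nbd0, and a Malloc0 bdev it creates itself):

    #!/usr/bin/env bash
    # Sketch only: the bdev size and device node are assumptions, not log values.
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=$(mktemp)

    "$rpc" -s "$sock" bdev_malloc_create -b Malloc0 64 4096    # 64 MiB bdev, 4 KiB blocks
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0

    # Same counting pipeline as the trace: JSON list -> device names -> grep -c.
    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    echo "exported nbd devices: $count"

    # Write random data through the kernel block device, then compare it back.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$tmp" /dev/nbd0 && echo "verify OK"

    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    rm -f "$tmp"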
00:06:33.022 11:37:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.022 11:37:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:33.022 11:37:46 event.app_repeat -- event/event.sh@39 -- # killprocess 70589 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70589 ']' 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70589 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70589 00:06:33.022 killing process with pid 70589 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70589' 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70589 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70589 00:06:33.022 spdk_app_start is called in Round 0. 00:06:33.022 Shutdown signal received, stop current app iteration 00:06:33.022 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:33.022 spdk_app_start is called in Round 1. 00:06:33.022 Shutdown signal received, stop current app iteration 00:06:33.022 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:33.022 spdk_app_start is called in Round 2. 00:06:33.022 Shutdown signal received, stop current app iteration 00:06:33.022 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:33.022 spdk_app_start is called in Round 3. 00:06:33.022 Shutdown signal received, stop current app iteration 00:06:33.022 11:37:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:33.022 11:37:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:33.022 00:06:33.022 real 0m16.952s 00:06:33.022 user 0m37.863s 00:06:33.022 sys 0m2.085s 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.022 ************************************ 00:06:33.022 END TEST app_repeat 00:06:33.022 ************************************ 00:06:33.022 11:37:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.022 11:37:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:33.022 11:37:46 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:33.022 11:37:46 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.022 11:37:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.022 11:37:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.022 ************************************ 00:06:33.022 START TEST cpu_locks 00:06:33.022 ************************************ 00:06:33.022 11:37:46 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:33.022 * Looking for test storage... 
00:06:33.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:33.022 11:37:46 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:33.022 11:37:46 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:33.022 11:37:46 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:33.281 11:37:46 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:33.281 11:37:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.282 11:37:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:33.282 11:37:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.282 11:37:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.282 11:37:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.282 11:37:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:33.282 11:37:46 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.282 11:37:46 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:33.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.282 --rc genhtml_branch_coverage=1 00:06:33.282 --rc genhtml_function_coverage=1 00:06:33.282 --rc genhtml_legend=1 00:06:33.282 --rc geninfo_all_blocks=1 00:06:33.282 --rc geninfo_unexecuted_blocks=1 00:06:33.282 00:06:33.282 ' 00:06:33.282 11:37:46 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:33.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.282 --rc genhtml_branch_coverage=1 00:06:33.282 --rc genhtml_function_coverage=1 
00:06:33.282 --rc genhtml_legend=1 00:06:33.282 --rc geninfo_all_blocks=1 00:06:33.282 --rc geninfo_unexecuted_blocks=1 00:06:33.282 00:06:33.282 ' 00:06:33.282 11:37:46 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:33.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.282 --rc genhtml_branch_coverage=1 00:06:33.282 --rc genhtml_function_coverage=1 00:06:33.282 --rc genhtml_legend=1 00:06:33.282 --rc geninfo_all_blocks=1 00:06:33.282 --rc geninfo_unexecuted_blocks=1 00:06:33.282 00:06:33.282 ' 00:06:33.282 11:37:46 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:33.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.282 --rc genhtml_branch_coverage=1 00:06:33.282 --rc genhtml_function_coverage=1 00:06:33.282 --rc genhtml_legend=1 00:06:33.282 --rc geninfo_all_blocks=1 00:06:33.282 --rc geninfo_unexecuted_blocks=1 00:06:33.282 00:06:33.282 ' 00:06:33.282 11:37:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:33.282 11:37:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:33.282 11:37:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:33.282 11:37:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:33.282 11:37:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.282 11:37:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.282 11:37:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.282 ************************************ 00:06:33.282 START TEST default_locks 00:06:33.282 ************************************ 00:06:33.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.282 11:37:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:33.282 11:37:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71009 00:06:33.282 11:37:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71009 00:06:33.282 11:37:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71009 ']' 00:06:33.282 11:37:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.282 11:37:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.282 11:37:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.282 11:37:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.282 11:37:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.282 11:37:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.282 [2024-11-19 11:37:46.546737] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
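Before cpu_locks runs, the harness probes lcov and compares its version with scripts/common.sh's field-wise comparator: 'lt 1.15 2' splits both strings on ., - and :, pads the shorter one with zeros, and compares field by field as decimals, so 1.15 < 2 because 1 < 2 in the first field. A trimmed sketch of that comparison, assuming purely numeric fields (the traced helper also validates each field):

    version_lt() {
        # Return 0 when $1 sorts before $2, field by field.
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # versions are equal
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"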
00:06:33.282 [2024-11-19 11:37:46.546851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71009 ] 00:06:33.282 [2024-11-19 11:37:46.679582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.540 [2024-11-19 11:37:46.712854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.107 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.107 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:34.107 11:37:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71009 00:06:34.107 11:37:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71009 00:06:34.107 11:37:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.365 11:37:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71009 00:06:34.365 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71009 ']' 00:06:34.365 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71009 00:06:34.365 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:34.365 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.365 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71009 00:06:34.365 killing process with pid 71009 00:06:34.365 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.365 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.365 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71009' 00:06:34.365 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71009 00:06:34.365 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71009 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71009 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71009 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:34.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
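default_locks boils down to two reusable helpers visible in the trace: locks_exist asks the kernel (via util-linux lslocks) whether the target pid holds any spdk_cpu_lock file locks, and killprocess resolves the victim's command name (an SPDK reactor shows up as reactor_0) before signalling it. Condensed sketches of both, paraphrasing the traced logic rather than quoting autotest_common.sh:

    locks_exist() {
        # spdk_tgt flocks /var/tmp/spdk_cpu_lock_NNN for every core it claims.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                       # still alive?
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        echo "killing process with pid $pid ($process_name)"
        kill "$pid" && wait "$pid" 2>/dev/null || true   # reap it if it is our child
    }

    locks_exist 71009 && killprocess 71009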
00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71009 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71009 ']' 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.624 ERROR: process (pid: 71009) is no longer running 00:06:34.624 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71009) - No such process 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:34.624 ************************************ 00:06:34.624 END TEST default_locks 00:06:34.624 ************************************ 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:34.624 00:06:34.624 real 0m1.464s 00:06:34.624 user 0m1.497s 00:06:34.624 sys 0m0.439s 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.624 11:37:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.624 11:37:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:34.624 11:37:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.624 11:37:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.624 11:37:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.624 ************************************ 00:06:34.624 START TEST default_locks_via_rpc 00:06:34.624 ************************************ 00:06:34.624 11:37:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:34.624 11:37:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71061 00:06:34.624 11:37:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71061 00:06:34.624 11:37:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71061 ']' 
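After the target dies, the suite asserts the failure mode on purpose: NOT waitforlisten 71009 must fail, which the trace records as es=1 followed by the (( es > 128 )) and (( !es == 0 )) probes before the negative test passes, and no_locks must find zero /var/tmp/spdk_cpu_lock_* files. A stripped-down sketch of the inversion wrapper (the real helper also validates its argument and special-cases individual signals):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es == 0 )) && return 1     # wrapped command succeeded: negative test fails
        (( es > 128 )) && return 1    # died from a signal: a crash, not a clean error
        return 0                      # ordinary nonzero exit: the failure we wanted
    }

    NOT waitforlisten 71009 && echo "pid 71009 is gone, as expected"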
00:06:34.624 11:37:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.624 11:37:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.624 11:37:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.624 11:37:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.624 11:37:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.624 11:37:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.882 [2024-11-19 11:37:48.053062] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:34.882 [2024-11-19 11:37:48.053178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71061 ] 00:06:34.882 [2024-11-19 11:37:48.182653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.882 [2024-11-19 11:37:48.216512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71061 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71061 00:06:35.817 11:37:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.817 11:37:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71061 00:06:35.817 11:37:49 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71061 ']' 00:06:35.817 11:37:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71061 00:06:35.817 11:37:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:35.817 11:37:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.817 11:37:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71061 00:06:35.817 11:37:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.817 killing process with pid 71061 00:06:35.817 11:37:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.817 11:37:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71061' 00:06:35.817 11:37:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71061 00:06:35.817 11:37:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71061 00:06:36.075 00:06:36.075 real 0m1.376s 00:06:36.075 user 0m1.424s 00:06:36.075 sys 0m0.401s 00:06:36.075 ************************************ 00:06:36.075 END TEST default_locks_via_rpc 00:06:36.075 ************************************ 00:06:36.075 11:37:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.075 11:37:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.075 11:37:49 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:36.075 11:37:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.075 11:37:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.075 11:37:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.075 ************************************ 00:06:36.075 START TEST non_locking_app_on_locked_coremask 00:06:36.075 ************************************ 00:06:36.075 11:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:36.075 11:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71103 00:06:36.075 11:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71103 /var/tmp/spdk.sock 00:06:36.075 11:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.075 11:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71103 ']' 00:06:36.075 11:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.075 11:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.075 11:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
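The default_locks_via_rpc run above makes the same guarantee reachable at runtime: framework_disable_cpumask_locks drops the per-core lock files while the target keeps running, and framework_enable_cpumask_locks takes them back, which lslocks then confirms. A compact replay against a live target (socket path as in the trace; tgt_pid is assumed to be the running spdk_tgt's pid):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    "$rpc" -s "$sock" framework_disable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock || echo "no core locks held"

    "$rpc" -s "$sock" framework_enable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core locks reacquired"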
00:06:36.075 11:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.075 11:37:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.332 [2024-11-19 11:37:49.486873] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:36.332 [2024-11-19 11:37:49.486998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71103 ] 00:06:36.332 [2024-11-19 11:37:49.615385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.332 [2024-11-19 11:37:49.647619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.266 11:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.266 11:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:37.266 11:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:37.266 11:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71119 00:06:37.266 11:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71119 /var/tmp/spdk2.sock 00:06:37.266 11:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71119 ']' 00:06:37.266 11:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.266 11:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.266 11:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.266 11:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.266 11:37:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.266 [2024-11-19 11:37:50.391947] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:37.266 [2024-11-19 11:37:50.392065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71119 ] 00:06:37.266 [2024-11-19 11:37:50.532435] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
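non_locking_app_on_locked_coremask launches the pair traced here: instance one on -m 0x1 takes the core-0 lock, and instance two may still share that core because it opts out with --disable-cpumask-locks and listens on its own RPC socket, /var/tmp/spdk2.sock. A sketch of the launch sequence, using a waitforlisten-style wait as sketched a little further down (backgrounding the targets like this is an assumption about the harness, not a quote of it):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$tgt" -m 0x1 & pid1=$!
    waitforlisten "$pid1"                                  # claims /var/tmp/spdk_cpu_lock_000

    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock              # coexists on core 0 anyway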
00:06:37.266 [2024-11-19 11:37:50.532481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.266 [2024-11-19 11:37:50.597575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71103 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71103 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71103 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71103 ']' 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71103 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71103 00:06:38.200 killing process with pid 71103 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71103' 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71103 00:06:38.200 11:37:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71103 00:06:38.766 11:37:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71119 00:06:38.766 11:37:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71119 ']' 00:06:38.766 11:37:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71119 00:06:38.766 11:37:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:38.766 11:37:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.766 11:37:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71119 00:06:38.766 killing process with pid 71119 00:06:38.766 11:37:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.766 11:37:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.766 11:37:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71119' 00:06:38.766 11:37:52 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71119 00:06:38.766 11:37:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71119 00:06:39.025 00:06:39.025 real 0m2.968s 00:06:39.025 user 0m3.297s 00:06:39.025 sys 0m0.745s 00:06:39.025 11:37:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.025 11:37:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.025 ************************************ 00:06:39.025 END TEST non_locking_app_on_locked_coremask 00:06:39.025 ************************************ 00:06:39.284 11:37:52 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:39.284 11:37:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.284 11:37:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.284 11:37:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.284 ************************************ 00:06:39.284 START TEST locking_app_on_unlocked_coremask 00:06:39.284 ************************************ 00:06:39.284 11:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:39.284 11:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:39.284 11:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71177 00:06:39.284 11:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71177 /var/tmp/spdk.sock 00:06:39.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.284 11:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71177 ']' 00:06:39.284 11:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.284 11:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.284 11:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.284 11:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.284 11:37:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.284 [2024-11-19 11:37:52.525999] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:39.284 [2024-11-19 11:37:52.526116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71177 ] 00:06:39.284 [2024-11-19 11:37:52.656305] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:39.284 [2024-11-19 11:37:52.656352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.284 [2024-11-19 11:37:52.689688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.217 11:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.217 11:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:40.217 11:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71193 00:06:40.217 11:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71193 /var/tmp/spdk2.sock 00:06:40.217 11:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.217 11:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71193 ']' 00:06:40.217 11:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.217 11:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.217 11:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.217 11:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.217 11:37:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.217 [2024-11-19 11:37:53.433431] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
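Each of these launches blocks in waitforlisten, which the traces show polling with max_retries=100 until the new target answers on its UNIX-domain socket, /var/tmp/spdk2.sock in this case. A minimal loop in the same spirit (using rpc_get_methods as the liveness probe is an assumption; the real helper checks more):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            [[ -S $sock ]] && "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }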
00:06:40.217 [2024-11-19 11:37:53.433724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71193 ] 00:06:40.217 [2024-11-19 11:37:53.574433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.475 [2024-11-19 11:37:53.638321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.041 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.041 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:41.041 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71193 00:06:41.041 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71193 00:06:41.041 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.299 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71177 00:06:41.299 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71177 ']' 00:06:41.299 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71177 00:06:41.299 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:41.299 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.299 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71177 00:06:41.299 killing process with pid 71177 00:06:41.299 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.299 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.300 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71177' 00:06:41.300 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71177 00:06:41.300 11:37:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71177 00:06:41.865 11:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71193 00:06:41.865 11:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71193 ']' 00:06:41.865 11:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71193 00:06:41.865 11:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:41.865 11:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.865 11:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71193 00:06:41.865 killing process with pid 71193 00:06:41.865 11:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.865 11:37:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.865 11:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71193' 00:06:41.865 11:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71193 00:06:41.865 11:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71193 00:06:42.124 ************************************ 00:06:42.124 END TEST locking_app_on_unlocked_coremask 00:06:42.124 ************************************ 00:06:42.124 00:06:42.124 real 0m2.856s 00:06:42.124 user 0m3.194s 00:06:42.124 sys 0m0.744s 00:06:42.124 11:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.124 11:37:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.124 11:37:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:42.124 11:37:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.124 11:37:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.124 11:37:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.124 ************************************ 00:06:42.124 START TEST locking_app_on_locked_coremask 00:06:42.124 ************************************ 00:06:42.124 11:37:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:42.124 11:37:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71251 00:06:42.124 11:37:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71251 /var/tmp/spdk.sock 00:06:42.124 11:37:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71251 ']' 00:06:42.124 11:37:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.124 11:37:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.124 11:37:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.124 11:37:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.124 11:37:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.124 11:37:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.124 [2024-11-19 11:37:55.417470] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:42.124 [2024-11-19 11:37:55.417586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71251 ] 00:06:42.381 [2024-11-19 11:37:55.550089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.381 [2024-11-19 11:37:55.578667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71267 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71267 /var/tmp/spdk2.sock 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71267 /var/tmp/spdk2.sock 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71267 /var/tmp/spdk2.sock 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71267 ']' 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.948 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.948 [2024-11-19 11:37:56.317574] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
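locking_app_on_locked_coremask is the negative counterpart: with pid 71251 holding core 0, a second unmodified target on the same mask has to abort in claim_cpu_cores, which is exactly the ERROR the next lines record, so its startup wait is wrapped in NOT. Sketched with the helpers outlined earlier (pids and backgrounding illustrative):

    "$tgt" -m 0x1 & pid1=$!
    waitforlisten "$pid1"                          # first instance owns core 0

    "$tgt" -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock &&
        echo "second instance was refused core 0, as intended"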
00:06:42.948 [2024-11-19 11:37:56.317861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71267 ] 00:06:43.206 [2024-11-19 11:37:56.452184] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71251 has claimed it. 00:06:43.206 [2024-11-19 11:37:56.452231] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.773 ERROR: process (pid: 71267) is no longer running 00:06:43.773 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71267) - No such process 00:06:43.773 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.773 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:43.773 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:43.773 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.773 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.773 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.773 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71251 00:06:43.773 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.773 11:37:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71251 00:06:43.773 11:37:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71251 00:06:43.773 11:37:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71251 ']' 00:06:43.773 11:37:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71251 00:06:43.773 11:37:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:43.773 11:37:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.031 11:37:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71251 00:06:44.031 killing process with pid 71251 00:06:44.031 11:37:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.031 11:37:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.031 11:37:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71251' 00:06:44.031 11:37:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71251 00:06:44.031 11:37:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71251 00:06:44.031 ************************************ 00:06:44.031 END TEST locking_app_on_locked_coremask 00:06:44.031 ************************************ 00:06:44.031 00:06:44.031 real 0m2.081s 00:06:44.031 user 0m2.346s 00:06:44.031 sys 0m0.488s 00:06:44.031 11:37:57 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.031 11:37:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.289 11:37:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:44.289 11:37:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.289 11:37:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.289 11:37:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.289 ************************************ 00:06:44.289 START TEST locking_overlapped_coremask 00:06:44.289 ************************************ 00:06:44.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.289 11:37:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:44.289 11:37:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71309 00:06:44.289 11:37:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:44.289 11:37:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71309 /var/tmp/spdk.sock 00:06:44.289 11:37:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71309 ']' 00:06:44.289 11:37:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.289 11:37:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.289 11:37:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.289 11:37:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.289 11:37:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.289 [2024-11-19 11:37:57.545462] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
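The final test widens the mask: -m 0x7 is cores 0, 1 and 2, matching the three 'Reactor started on core' notices that follow on the next lines. Decoding any cpumask into its core ids is a short loop:

    mask=0x7
    for (( core = 0; core < 64; core++ )); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # prints cores 0, 1 and 2 for 0x7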
00:06:44.290 [2024-11-19 11:37:57.545764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71309 ] 00:06:44.290 [2024-11-19 11:37:57.680307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.549 [2024-11-19 11:37:57.713353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.549 [2024-11-19 11:37:57.713482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.549 [2024-11-19 11:37:57.713494] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71327 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71327 /var/tmp/spdk2.sock 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71327 /var/tmp/spdk2.sock 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71327 /var/tmp/spdk2.sock 00:06:45.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71327 ']' 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.118 11:37:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.118 [2024-11-19 11:37:58.469204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
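Note: the first target above holds mask 0x7 (cores 0-2); the second, just launched with mask 0x1c (cores 2-4), overlaps on core 2 and is expected to abort below. Each claimed core is backed by a /var/tmp/spdk_cpu_lock_* file, and the harness verifies ownership with the same lslocks/grep pattern visible earlier in this trace. A stand-alone sketch of that ownership check (the pid is just this run's example; exact lock semantics beyond what the trace shows are an assumption):

    # Hedged sketch, mirroring cpu_locks.sh's check in the trace:
    # does the given pid still hold any SPDK per-core lock?
    pid=71309
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds one or more /var/tmp/spdk_cpu_lock_* locks"
    fi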
00:06:45.119 [2024-11-19 11:37:58.469319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71327 ] 00:06:45.377 [2024-11-19 11:37:58.609995] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71309 has claimed it. 00:06:45.377 [2024-11-19 11:37:58.610061] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:45.683 ERROR: process (pid: 71327) is no longer running 00:06:45.683 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71327) - No such process 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71309 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71309 ']' 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71309 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.683 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71309 00:06:45.941 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.941 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.941 killing process with pid 71309 00:06:45.941 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71309' 00:06:45.941 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71309 00:06:45.941 11:37:59 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71309 00:06:45.941 00:06:45.941 real 0m1.846s 00:06:45.941 user 0m5.095s 00:06:45.941 sys 0m0.372s 00:06:45.941 ************************************ 00:06:45.941 END TEST locking_overlapped_coremask 00:06:45.941 ************************************ 00:06:45.941 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.941 11:37:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.199 11:37:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:46.199 11:37:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.199 11:37:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.199 11:37:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.199 ************************************ 00:06:46.199 START TEST locking_overlapped_coremask_via_rpc 00:06:46.199 ************************************ 00:06:46.199 11:37:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:46.199 11:37:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71369 00:06:46.199 11:37:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71369 /var/tmp/spdk.sock 00:06:46.200 11:37:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71369 ']' 00:06:46.200 11:37:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.200 11:37:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:46.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.200 11:37:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.200 11:37:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.200 11:37:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.200 11:37:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.200 [2024-11-19 11:37:59.445803] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:46.200 [2024-11-19 11:37:59.445928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71369 ] 00:06:46.200 [2024-11-19 11:37:59.578300] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
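Note: in this via_rpc variant the target above was started with --disable-cpumask-locks, which is what the "CPU core locks deactivated" notice reports; a second target is launched the same way below, so overlapping masks can coexist until the locks are requested over RPC. Running that second launch by hand would look like the trace's own command line (a sketch, arguments copied from this log):

    # Second target: overlapping mask 0x1c, its own RPC socket, core locks deferred.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c \
        -r /var/tmp/spdk2.sock --disable-cpumask-locks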
00:06:46.200 [2024-11-19 11:37:59.578343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.458 [2024-11-19 11:37:59.609816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.458 [2024-11-19 11:37:59.610044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.458 [2024-11-19 11:37:59.610052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.024 11:38:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.024 11:38:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:47.024 11:38:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:47.024 11:38:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71387 00:06:47.024 11:38:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71387 /var/tmp/spdk2.sock 00:06:47.024 11:38:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71387 ']' 00:06:47.024 11:38:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.024 11:38:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.024 11:38:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.024 11:38:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.024 11:38:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.024 [2024-11-19 11:38:00.382344] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:47.024 [2024-11-19 11:38:00.382649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71387 ] 00:06:47.281 [2024-11-19 11:38:00.522135] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:47.281 [2024-11-19 11:38:00.522188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.281 [2024-11-19 11:38:00.594526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.281 [2024-11-19 11:38:00.597587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.281 [2024-11-19 11:38:00.597655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:47.850 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.850 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:47.850 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:47.850 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.850 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.850 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.850 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.850 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:47.850 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.850 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:47.850 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.850 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:48.108 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.108 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:48.108 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.108 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.108 [2024-11-19 11:38:01.262554] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71369 has claimed it. 00:06:48.109 request: 00:06:48.109 { 00:06:48.109 "method": "framework_enable_cpumask_locks", 00:06:48.109 "req_id": 1 00:06:48.109 } 00:06:48.109 Got JSON-RPC error response 00:06:48.109 response: 00:06:48.109 { 00:06:48.109 "code": -32603, 00:06:48.109 "message": "Failed to claim CPU core: 2" 00:06:48.109 } 00:06:48.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
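Note: the JSON-RPC exchange above is the heart of this test: framework_enable_cpumask_locks succeeds on the first target (claiming cores 0-2), then fails on the second with -32603 because core 2 is already locked. Reproducing the failing call by hand, using the rpc.py client visible elsewhere in this log (a sketch, not harness code):

    # Ask the second target (socket /var/tmp/spdk2.sock) to take its core locks.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks
    # Expected while the first target is alive: error -32603,
    # "Failed to claim CPU core: 2", exactly as in the response above.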
00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71369 /var/tmp/spdk.sock 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71369 ']' 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71387 /var/tmp/spdk2.sock 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71387 ']' 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.109 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.366 ************************************ 00:06:48.366 END TEST locking_overlapped_coremask_via_rpc 00:06:48.366 ************************************ 00:06:48.366 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.366 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:48.366 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:48.366 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.366 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.366 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.366 00:06:48.366 real 0m2.306s 00:06:48.366 user 0m1.090s 00:06:48.366 sys 0m0.139s 00:06:48.366 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.366 11:38:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.366 11:38:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:48.366 11:38:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71369 ]] 00:06:48.366 11:38:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71369 00:06:48.366 11:38:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71369 ']' 00:06:48.366 11:38:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71369 00:06:48.366 11:38:01 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:48.366 11:38:01 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.366 11:38:01 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71369 00:06:48.366 killing process with pid 71369 00:06:48.366 11:38:01 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.366 11:38:01 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.366 11:38:01 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71369' 00:06:48.366 11:38:01 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71369 00:06:48.366 11:38:01 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71369 00:06:48.624 11:38:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71387 ]] 00:06:48.624 11:38:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71387 00:06:48.624 11:38:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71387 ']' 00:06:48.624 11:38:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71387 00:06:48.624 11:38:01 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:48.624 11:38:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.624 
11:38:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71387 00:06:48.624 killing process with pid 71387 00:06:48.624 11:38:02 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:48.624 11:38:02 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:48.624 11:38:02 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71387' 00:06:48.624 11:38:02 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71387 00:06:48.624 11:38:02 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71387 00:06:49.192 11:38:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:49.192 Process with pid 71369 is not found 00:06:49.192 Process with pid 71387 is not found 00:06:49.192 11:38:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:49.192 11:38:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71369 ]] 00:06:49.192 11:38:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71369 00:06:49.192 11:38:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71369 ']' 00:06:49.192 11:38:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71369 00:06:49.192 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71369) - No such process 00:06:49.192 11:38:02 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71369 is not found' 00:06:49.192 11:38:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71387 ]] 00:06:49.192 11:38:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71387 00:06:49.192 11:38:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71387 ']' 00:06:49.192 11:38:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71387 00:06:49.192 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71387) - No such process 00:06:49.192 11:38:02 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71387 is not found' 00:06:49.192 11:38:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:49.192 ************************************ 00:06:49.192 END TEST cpu_locks 00:06:49.192 ************************************ 00:06:49.192 00:06:49.192 real 0m15.978s 00:06:49.192 user 0m28.396s 00:06:49.192 sys 0m4.055s 00:06:49.192 11:38:02 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.192 11:38:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.192 ************************************ 00:06:49.192 END TEST event 00:06:49.192 ************************************ 00:06:49.192 00:06:49.192 real 0m42.020s 00:06:49.192 user 1m21.793s 00:06:49.192 sys 0m6.890s 00:06:49.192 11:38:02 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.192 11:38:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.192 11:38:02 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:49.192 11:38:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.192 11:38:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.192 11:38:02 -- common/autotest_common.sh@10 -- # set +x 00:06:49.192 ************************************ 00:06:49.192 START TEST thread 00:06:49.192 ************************************ 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:49.192 * Looking for test storage... 
00:06:49.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:49.192 11:38:02 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.192 11:38:02 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.192 11:38:02 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.192 11:38:02 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.192 11:38:02 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.192 11:38:02 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.192 11:38:02 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.192 11:38:02 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.192 11:38:02 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.192 11:38:02 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.192 11:38:02 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.192 11:38:02 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:49.192 11:38:02 thread -- scripts/common.sh@345 -- # : 1 00:06:49.192 11:38:02 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.192 11:38:02 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.192 11:38:02 thread -- scripts/common.sh@365 -- # decimal 1 00:06:49.192 11:38:02 thread -- scripts/common.sh@353 -- # local d=1 00:06:49.192 11:38:02 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.192 11:38:02 thread -- scripts/common.sh@355 -- # echo 1 00:06:49.192 11:38:02 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.192 11:38:02 thread -- scripts/common.sh@366 -- # decimal 2 00:06:49.192 11:38:02 thread -- scripts/common.sh@353 -- # local d=2 00:06:49.192 11:38:02 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.192 11:38:02 thread -- scripts/common.sh@355 -- # echo 2 00:06:49.192 11:38:02 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.192 11:38:02 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.192 11:38:02 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.192 11:38:02 thread -- scripts/common.sh@368 -- # return 0 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:49.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.192 --rc genhtml_branch_coverage=1 00:06:49.192 --rc genhtml_function_coverage=1 00:06:49.192 --rc genhtml_legend=1 00:06:49.192 --rc geninfo_all_blocks=1 00:06:49.192 --rc geninfo_unexecuted_blocks=1 00:06:49.192 00:06:49.192 ' 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:49.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.192 --rc genhtml_branch_coverage=1 00:06:49.192 --rc genhtml_function_coverage=1 00:06:49.192 --rc genhtml_legend=1 00:06:49.192 --rc geninfo_all_blocks=1 00:06:49.192 --rc geninfo_unexecuted_blocks=1 00:06:49.192 00:06:49.192 ' 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:49.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:49.192 --rc genhtml_branch_coverage=1 00:06:49.192 --rc genhtml_function_coverage=1 00:06:49.192 --rc genhtml_legend=1 00:06:49.192 --rc geninfo_all_blocks=1 00:06:49.192 --rc geninfo_unexecuted_blocks=1 00:06:49.192 00:06:49.192 ' 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:49.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.192 --rc genhtml_branch_coverage=1 00:06:49.192 --rc genhtml_function_coverage=1 00:06:49.192 --rc genhtml_legend=1 00:06:49.192 --rc geninfo_all_blocks=1 00:06:49.192 --rc geninfo_unexecuted_blocks=1 00:06:49.192 00:06:49.192 ' 00:06:49.192 11:38:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.192 11:38:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.192 ************************************ 00:06:49.192 START TEST thread_poller_perf 00:06:49.192 ************************************ 00:06:49.192 11:38:02 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:49.192 [2024-11-19 11:38:02.577752] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:49.192 [2024-11-19 11:38:02.577901] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71514 ] 00:06:49.451 [2024-11-19 11:38:02.716528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.451 [2024-11-19 11:38:02.750394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.451 Running 1000 pollers for 1 seconds with 1 microseconds period. 
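Note: the result tables that follow report poller_cost, which is simply busy cycles divided by total_run_count, converted to nanoseconds via the advertised tsc_hz. Checking the first run's figures by hand (a sketch; the values come from the table below, and poller_perf itself is not claimed to compute them this way):

    # poller_cost sanity check for the -l 1 run below.
    busy=2612590444; runs=306000; tsc_hz=2600000000
    echo "cyc per poll:  $(( busy / runs ))"                        # 8537
    echo "nsec per poll: $(( busy / runs * 1000000000 / tsc_hz ))"  # 3283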
00:06:50.825 [2024-11-19T11:38:04.237Z] ====================================== 00:06:50.825 [2024-11-19T11:38:04.237Z] busy:2612590444 (cyc) 00:06:50.825 [2024-11-19T11:38:04.237Z] total_run_count: 306000 00:06:50.825 [2024-11-19T11:38:04.237Z] tsc_hz: 2600000000 (cyc) 00:06:50.825 [2024-11-19T11:38:04.237Z] ====================================== 00:06:50.825 [2024-11-19T11:38:04.237Z] poller_cost: 8537 (cyc), 3283 (nsec) 00:06:50.825 00:06:50.825 real 0m1.269s 00:06:50.825 user 0m1.101s 00:06:50.825 sys 0m0.060s 00:06:50.825 11:38:03 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.825 11:38:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.825 ************************************ 00:06:50.825 END TEST thread_poller_perf 00:06:50.825 ************************************ 00:06:50.825 11:38:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:50.825 11:38:03 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:50.825 11:38:03 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.825 11:38:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.825 ************************************ 00:06:50.825 START TEST thread_poller_perf 00:06:50.825 ************************************ 00:06:50.825 11:38:03 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:50.825 [2024-11-19 11:38:03.909925] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:50.825 [2024-11-19 11:38:03.910154] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71545 ] 00:06:50.825 [2024-11-19 11:38:04.044011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.825 [2024-11-19 11:38:04.077604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.825 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:51.760 [2024-11-19T11:38:05.172Z] ====================================== 00:06:51.760 [2024-11-19T11:38:05.172Z] busy:2603268182 (cyc) 00:06:51.760 [2024-11-19T11:38:05.172Z] total_run_count: 3971000 00:06:51.760 [2024-11-19T11:38:05.172Z] tsc_hz: 2600000000 (cyc) 00:06:51.760 [2024-11-19T11:38:05.172Z] ====================================== 00:06:51.760 [2024-11-19T11:38:05.172Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:51.760 00:06:51.760 real 0m1.254s 00:06:51.760 user 0m1.091s 00:06:51.760 sys 0m0.056s 00:06:51.760 11:38:05 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.760 ************************************ 00:06:51.760 END TEST thread_poller_perf 00:06:51.760 11:38:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.760 ************************************ 00:06:52.019 11:38:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:52.019 ************************************ 00:06:52.019 END TEST thread 00:06:52.019 ************************************ 00:06:52.019 00:06:52.019 real 0m2.793s 00:06:52.019 user 0m2.305s 00:06:52.019 sys 0m0.228s 00:06:52.019 11:38:05 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.019 11:38:05 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.019 11:38:05 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:52.019 11:38:05 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:52.019 11:38:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.019 11:38:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.019 11:38:05 -- common/autotest_common.sh@10 -- # set +x 00:06:52.019 ************************************ 00:06:52.019 START TEST app_cmdline 00:06:52.019 ************************************ 00:06:52.019 11:38:05 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:52.019 * Looking for test storage... 
00:06:52.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:52.019 11:38:05 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:52.019 11:38:05 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:52.019 11:38:05 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:52.019 11:38:05 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.019 11:38:05 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:52.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:52.020 11:38:05 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.020 11:38:05 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.020 11:38:05 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.020 11:38:05 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:52.020 11:38:05 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.020 11:38:05 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:52.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.020 --rc genhtml_branch_coverage=1 00:06:52.020 --rc genhtml_function_coverage=1 00:06:52.020 --rc genhtml_legend=1 00:06:52.020 --rc geninfo_all_blocks=1 00:06:52.020 --rc geninfo_unexecuted_blocks=1 00:06:52.020 00:06:52.020 ' 00:06:52.020 11:38:05 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:52.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.020 --rc genhtml_branch_coverage=1 00:06:52.020 --rc genhtml_function_coverage=1 00:06:52.020 --rc genhtml_legend=1 00:06:52.020 --rc geninfo_all_blocks=1 00:06:52.020 --rc geninfo_unexecuted_blocks=1 00:06:52.020 00:06:52.020 ' 00:06:52.020 11:38:05 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:52.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.020 --rc genhtml_branch_coverage=1 00:06:52.020 --rc genhtml_function_coverage=1 00:06:52.020 --rc genhtml_legend=1 00:06:52.020 --rc geninfo_all_blocks=1 00:06:52.020 --rc geninfo_unexecuted_blocks=1 00:06:52.020 00:06:52.020 ' 00:06:52.020 11:38:05 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:52.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.020 --rc genhtml_branch_coverage=1 00:06:52.020 --rc genhtml_function_coverage=1 00:06:52.020 --rc genhtml_legend=1 00:06:52.020 --rc geninfo_all_blocks=1 00:06:52.020 --rc geninfo_unexecuted_blocks=1 00:06:52.020 00:06:52.020 ' 00:06:52.020 11:38:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:52.020 11:38:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71634 00:06:52.020 11:38:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71634 00:06:52.020 11:38:05 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71634 ']' 00:06:52.020 11:38:05 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.020 11:38:05 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.020 11:38:05 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.020 11:38:05 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.020 11:38:05 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:52.020 11:38:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:52.278 [2024-11-19 11:38:05.448143] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
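Note: the target above is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are accepted; the env_dpdk_get_mem_stats call later in this test is therefore expected to fail with -32601 "Method not found". Exercising the allowlist by hand (a sketch; method names are from this log, and the default /var/tmp/spdk.sock socket is assumed):

    # Allowed: returns the version object printed further down the trace.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    # Not on the allowlist: rejected with JSON-RPC error -32601.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats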
00:06:52.278 [2024-11-19 11:38:05.448417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71634 ] 00:06:52.278 [2024-11-19 11:38:05.584263] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.278 [2024-11-19 11:38:05.617432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:53.214 11:38:06 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:53.214 { 00:06:53.214 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:53.214 "fields": { 00:06:53.214 "major": 24, 00:06:53.214 "minor": 9, 00:06:53.214 "patch": 1, 00:06:53.214 "suffix": "-pre", 00:06:53.214 "commit": "b18e1bd62" 00:06:53.214 } 00:06:53.214 } 00:06:53.214 11:38:06 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:53.214 11:38:06 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:53.214 11:38:06 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:53.214 11:38:06 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:53.214 11:38:06 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:53.214 11:38:06 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:53.214 11:38:06 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.214 11:38:06 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:53.214 11:38:06 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:53.214 11:38:06 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:53.214 11:38:06 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:53.473 request: 00:06:53.473 { 00:06:53.473 "method": "env_dpdk_get_mem_stats", 00:06:53.473 "req_id": 1 00:06:53.473 } 00:06:53.473 Got JSON-RPC error response 00:06:53.473 response: 00:06:53.473 { 00:06:53.473 "code": -32601, 00:06:53.473 "message": "Method not found" 00:06:53.473 } 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.473 11:38:06 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71634 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71634 ']' 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71634 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71634 00:06:53.473 killing process with pid 71634 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71634' 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@969 -- # kill 71634 00:06:53.473 11:38:06 app_cmdline -- common/autotest_common.sh@974 -- # wait 71634 00:06:53.731 ************************************ 00:06:53.732 END TEST app_cmdline 00:06:53.732 ************************************ 00:06:53.732 00:06:53.732 real 0m1.767s 00:06:53.732 user 0m2.137s 00:06:53.732 sys 0m0.359s 00:06:53.732 11:38:07 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.732 11:38:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:53.732 11:38:07 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:53.732 11:38:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.732 11:38:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.732 11:38:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.732 ************************************ 00:06:53.732 START TEST version 00:06:53.732 ************************************ 00:06:53.732 11:38:07 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:53.732 * Looking for test storage... 
00:06:53.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:53.732 11:38:07 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.732 11:38:07 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.732 11:38:07 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:53.990 11:38:07 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:53.990 11:38:07 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.990 11:38:07 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.990 11:38:07 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.990 11:38:07 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.990 11:38:07 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.990 11:38:07 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.990 11:38:07 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.990 11:38:07 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.990 11:38:07 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.990 11:38:07 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.990 11:38:07 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.990 11:38:07 version -- scripts/common.sh@344 -- # case "$op" in 00:06:53.990 11:38:07 version -- scripts/common.sh@345 -- # : 1 00:06:53.990 11:38:07 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.990 11:38:07 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.990 11:38:07 version -- scripts/common.sh@365 -- # decimal 1 00:06:53.990 11:38:07 version -- scripts/common.sh@353 -- # local d=1 00:06:53.990 11:38:07 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.990 11:38:07 version -- scripts/common.sh@355 -- # echo 1 00:06:53.990 11:38:07 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.990 11:38:07 version -- scripts/common.sh@366 -- # decimal 2 00:06:53.990 11:38:07 version -- scripts/common.sh@353 -- # local d=2 00:06:53.990 11:38:07 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.990 11:38:07 version -- scripts/common.sh@355 -- # echo 2 00:06:53.990 11:38:07 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.990 11:38:07 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.990 11:38:07 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.990 11:38:07 version -- scripts/common.sh@368 -- # return 0 00:06:53.990 11:38:07 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.990 11:38:07 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:53.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.990 --rc genhtml_branch_coverage=1 00:06:53.990 --rc genhtml_function_coverage=1 00:06:53.990 --rc genhtml_legend=1 00:06:53.990 --rc geninfo_all_blocks=1 00:06:53.990 --rc geninfo_unexecuted_blocks=1 00:06:53.990 00:06:53.990 ' 00:06:53.990 11:38:07 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:53.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.990 --rc genhtml_branch_coverage=1 00:06:53.990 --rc genhtml_function_coverage=1 00:06:53.990 --rc genhtml_legend=1 00:06:53.990 --rc geninfo_all_blocks=1 00:06:53.990 --rc geninfo_unexecuted_blocks=1 00:06:53.990 00:06:53.990 ' 00:06:53.990 11:38:07 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:53.990 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:53.990 --rc genhtml_branch_coverage=1 00:06:53.991 --rc genhtml_function_coverage=1 00:06:53.991 --rc genhtml_legend=1 00:06:53.991 --rc geninfo_all_blocks=1 00:06:53.991 --rc geninfo_unexecuted_blocks=1 00:06:53.991 00:06:53.991 ' 00:06:53.991 11:38:07 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:53.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.991 --rc genhtml_branch_coverage=1 00:06:53.991 --rc genhtml_function_coverage=1 00:06:53.991 --rc genhtml_legend=1 00:06:53.991 --rc geninfo_all_blocks=1 00:06:53.991 --rc geninfo_unexecuted_blocks=1 00:06:53.991 00:06:53.991 ' 00:06:53.991 11:38:07 version -- app/version.sh@17 -- # get_header_version major 00:06:53.991 11:38:07 version -- app/version.sh@14 -- # cut -f2 00:06:53.991 11:38:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.991 11:38:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.991 11:38:07 version -- app/version.sh@17 -- # major=24 00:06:53.991 11:38:07 version -- app/version.sh@18 -- # get_header_version minor 00:06:53.991 11:38:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.991 11:38:07 version -- app/version.sh@14 -- # cut -f2 00:06:53.991 11:38:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.991 11:38:07 version -- app/version.sh@18 -- # minor=9 00:06:53.991 11:38:07 version -- app/version.sh@19 -- # get_header_version patch 00:06:53.991 11:38:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.991 11:38:07 version -- app/version.sh@14 -- # cut -f2 00:06:53.991 11:38:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.991 11:38:07 version -- app/version.sh@19 -- # patch=1 00:06:53.991 11:38:07 version -- app/version.sh@20 -- # get_header_version suffix 00:06:53.991 11:38:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:53.991 11:38:07 version -- app/version.sh@14 -- # cut -f2 00:06:53.991 11:38:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:53.991 11:38:07 version -- app/version.sh@20 -- # suffix=-pre 00:06:53.991 11:38:07 version -- app/version.sh@22 -- # version=24.9 00:06:53.991 11:38:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:53.991 11:38:07 version -- app/version.sh@25 -- # version=24.9.1 00:06:53.991 11:38:07 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:53.991 11:38:07 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:53.991 11:38:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:53.991 11:38:07 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:53.991 11:38:07 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:53.991 00:06:53.991 real 0m0.189s 00:06:53.991 user 0m0.105s 00:06:53.991 sys 0m0.107s 00:06:53.991 ************************************ 00:06:53.991 END TEST version 00:06:53.991 ************************************ 00:06:53.991 11:38:07 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.991 11:38:07 
version -- common/autotest_common.sh@10 -- # set +x 00:06:53.991 11:38:07 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:53.991 11:38:07 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:53.991 11:38:07 -- spdk/autotest.sh@194 -- # uname -s 00:06:53.991 11:38:07 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:53.991 11:38:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:53.991 11:38:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:53.991 11:38:07 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:53.991 11:38:07 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:53.991 11:38:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:53.991 11:38:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.991 11:38:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.991 ************************************ 00:06:53.991 START TEST blockdev_nvme 00:06:53.991 ************************************ 00:06:53.991 11:38:07 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:53.991 * Looking for test storage... 00:06:53.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:53.991 11:38:07 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.991 11:38:07 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.991 11:38:07 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:54.251 11:38:07 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.251 11:38:07 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:54.251 11:38:07 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.251 11:38:07 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.251 --rc genhtml_branch_coverage=1 00:06:54.251 --rc genhtml_function_coverage=1 00:06:54.251 --rc genhtml_legend=1 00:06:54.251 --rc geninfo_all_blocks=1 00:06:54.251 --rc geninfo_unexecuted_blocks=1 00:06:54.251 00:06:54.251 ' 00:06:54.251 11:38:07 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.251 --rc genhtml_branch_coverage=1 00:06:54.251 --rc genhtml_function_coverage=1 00:06:54.251 --rc genhtml_legend=1 00:06:54.251 --rc geninfo_all_blocks=1 00:06:54.251 --rc geninfo_unexecuted_blocks=1 00:06:54.251 00:06:54.251 ' 00:06:54.251 11:38:07 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.251 --rc genhtml_branch_coverage=1 00:06:54.251 --rc genhtml_function_coverage=1 00:06:54.251 --rc genhtml_legend=1 00:06:54.251 --rc geninfo_all_blocks=1 00:06:54.251 --rc geninfo_unexecuted_blocks=1 00:06:54.251 00:06:54.251 ' 00:06:54.251 11:38:07 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:54.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.251 --rc genhtml_branch_coverage=1 00:06:54.251 --rc genhtml_function_coverage=1 00:06:54.251 --rc genhtml_legend=1 00:06:54.251 --rc geninfo_all_blocks=1 00:06:54.251 --rc geninfo_unexecuted_blocks=1 00:06:54.251 00:06:54.251 ' 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:54.251 11:38:07 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:54.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71795 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71795 00:06:54.251 11:38:07 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:54.251 11:38:07 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 71795 ']' 00:06:54.251 11:38:07 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.251 11:38:07 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.251 11:38:07 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.251 11:38:07 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.251 11:38:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.251 [2024-11-19 11:38:07.487285] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
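The spdk_tgt launch above follows the harness's standard pattern: fork the target, then block in waitforlisten until the process answers on the RPC socket. A minimal sketch of that wait loop, assuming the repo's scripts/rpc.py and the default /var/tmp/spdk.sock; the poll interval and the use of rpc_get_methods as the liveness probe are assumptions, and the real helper in common/autotest_common.sh handles more corner cases:

# Hedged sketch of the waitforlisten pattern, not the exact helper.
# usage: waitforlisten "$spdk_tgt_pid"
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < 100; i++ )); do           # max_retries=100, as logged above
        kill -0 "$pid" 2>/dev/null || return 1  # give up if the target died
        # rpc_get_methods succeeds once the RPC server accepts connections
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5                               # poll interval is an assumption
    done
    return 1
}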
00:06:54.251 [2024-11-19 11:38:07.487431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71795 ] 00:06:54.251 [2024-11-19 11:38:07.620875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.251 [2024-11-19 11:38:07.653484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.220 11:38:08 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.220 11:38:08 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:55.220 11:38:08 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:55.220 11:38:08 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:55.220 11:38:08 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:55.220 11:38:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:55.220 11:38:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:55.220 11:38:08 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:55.220 11:38:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.220 11:38:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.480 11:38:08 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.480 11:38:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:55.480 11:38:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.480 11:38:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.480 11:38:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.480 11:38:08 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:55.480 11:38:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:55.480 11:38:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.480 11:38:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.480 11:38:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:55.480 11:38:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:55.481 11:38:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d8df93e1-c9d9-49b3-8a30-330a59a7a61f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d8df93e1-c9d9-49b3-8a30-330a59a7a61f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "05bab829-9eec-47d3-9394-152b5369eee7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "05bab829-9eec-47d3-9394-152b5369eee7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "226345fb-41d3-40df-ae5e-bb5225c73a0d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "226345fb-41d3-40df-ae5e-bb5225c73a0d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "9c73ccb8-ba14-42a6-9acd-8337633cf8ed"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c73ccb8-ba14-42a6-9acd-8337633cf8ed",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "83a7e741-fd2e-489c-a14a-c5a1b5e5a3d6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "83a7e741-fd2e-489c-a14a-c5a1b5e5a3d6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "415e9424-35db-45fb-ae74-18e7f89a135e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "415e9424-35db-45fb-ae74-18e7f89a135e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:55.481 11:38:08 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:55.481 11:38:08 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:55.481 11:38:08 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:55.481 11:38:08 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71795 00:06:55.481 11:38:08 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 71795 ']' 00:06:55.481 11:38:08 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 71795 00:06:55.481 11:38:08 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:55.481 11:38:08 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.481 11:38:08 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71795 00:06:55.481 killing process with pid 71795 00:06:55.481 11:38:08 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.481 11:38:08 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.481 11:38:08 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71795' 00:06:55.481 11:38:08 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 71795 00:06:55.481 11:38:08 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 71795 00:06:56.051 11:38:09 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:56.051 11:38:09 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:56.051 11:38:09 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:56.051 11:38:09 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.051 11:38:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.051 ************************************ 00:06:56.051 START TEST bdev_hello_world 00:06:56.051 ************************************ 00:06:56.051 11:38:09 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:56.051 [2024-11-19 11:38:09.270705] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:56.051 [2024-11-19 11:38:09.270870] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71857 ] 00:06:56.051 [2024-11-19 11:38:09.408430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.309 [2024-11-19 11:38:09.472571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.569 [2024-11-19 11:38:09.883845] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:56.569 [2024-11-19 11:38:09.883900] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:56.569 [2024-11-19 11:38:09.883928] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:56.569 [2024-11-19 11:38:09.886016] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:56.569 [2024-11-19 11:38:09.887185] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:56.569 [2024-11-19 11:38:09.887237] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:56.569 [2024-11-19 11:38:09.887889] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
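The hello_bdev run above opens the bdev named with -b, writes "Hello World!" through an I/O channel, and reads it back before stopping the app. It can be reproduced standalone with a much smaller config than the four-controller one gen_nvme.sh emitted; in this sketch /tmp/bdev.json is a stand-in path and only the first QEMU controller is attached:

# Minimal config: attach one PCIe NVMe controller as Nvme0 (namespace Nvme0n1)
cat > /tmp/bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
] } ] }
EOF
# Same invocation as the test, minus the harness wrapping (run from the repo root)
build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1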
00:06:56.569 00:06:56.569 [2024-11-19 11:38:09.887921] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:56.831 00:06:56.831 real 0m0.848s 00:06:56.831 user 0m0.535s 00:06:56.831 sys 0m0.206s 00:06:56.831 11:38:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.831 ************************************ 00:06:56.831 END TEST bdev_hello_world 00:06:56.831 ************************************ 00:06:56.831 11:38:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:56.831 11:38:10 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:56.831 11:38:10 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:56.831 11:38:10 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.831 11:38:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.831 ************************************ 00:06:56.831 START TEST bdev_bounds 00:06:56.831 ************************************ 00:06:56.831 Process bdevio pid: 71888 00:06:56.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.831 11:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:56.831 11:38:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71888 00:06:56.831 11:38:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.831 11:38:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71888' 00:06:56.831 11:38:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71888 00:06:56.831 11:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71888 ']' 00:06:56.831 11:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.831 11:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.831 11:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.831 11:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.831 11:38:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:56.831 11:38:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:56.831 [2024-11-19 11:38:10.174472] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
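bdev_bounds drives bdevio in two steps, both visible in the xtrace above: start the app waiting (-w) with no reserved memory (-s 0) so it only sets up the bdevs from bdev.json, then fire the actual boundary suites over RPC with tests.py. Roughly, with paths relative to the SPDK repo and the real trap/cleanup plumbing omitted:

# Start bdevio idle; it attaches the bdevs and listens on /var/tmp/spdk.sock
test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
bdevio_pid=$!
# (the harness waits for the RPC socket here, as sketched earlier)
test/bdev/bdevio/tests.py perform_tests   # runs every suite shown below
killprocess "$bdevio_pid"                 # helper from autotest_common.sh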
00:06:56.831 [2024-11-19 11:38:10.174785] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71888 ] 00:06:57.092 [2024-11-19 11:38:10.312514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.092 [2024-11-19 11:38:10.352401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.092 [2024-11-19 11:38:10.352591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.092 [2024-11-19 11:38:10.352649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.667 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.667 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:57.667 11:38:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:57.929 I/O targets: 00:06:57.929 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:57.929 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:57.929 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:57.929 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:57.929 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:57.929 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:57.929 00:06:57.929 00:06:57.929 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.929 http://cunit.sourceforge.net/ 00:06:57.929 00:06:57.929 00:06:57.929 Suite: bdevio tests on: Nvme3n1 00:06:57.929 Test: blockdev write read block ...passed 00:06:57.929 Test: blockdev write zeroes read block ...passed 00:06:57.929 Test: blockdev write zeroes read no split ...passed 00:06:57.929 Test: blockdev write zeroes read split ...passed 00:06:57.929 Test: blockdev write zeroes read split partial ...passed 00:06:57.929 Test: blockdev reset ...[2024-11-19 11:38:11.167577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:57.929 passed 00:06:57.929 Test: blockdev write read 8 blocks ...[2024-11-19 11:38:11.171165] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
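Each suite opens with a reset of the controller behind the bdev under test; the disconnect/reset-complete NOTICE pair above is the signature of a clean pass through that path. Against an already-running target the same path can be poked by hand; this one-liner is an assumption in that the RPC call is not shown in this log, and the controller name must match what was attached (Nvme3 here):

# Manual equivalent of the in-suite reset (assumes a live RPC socket)
scripts/rpc.py bdev_nvme_reset_controller Nvme3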
00:06:57.929 passed 00:06:57.929 Test: blockdev write read size > 128k ...passed 00:06:57.929 Test: blockdev write read invalid size ...passed 00:06:57.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.929 Test: blockdev write read max offset ...passed 00:06:57.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.929 Test: blockdev writev readv 8 blocks ...passed 00:06:57.929 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.929 Test: blockdev writev readv block ...passed 00:06:57.929 Test: blockdev writev readv size > 128k ...passed 00:06:57.929 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.929 Test: blockdev comparev and writev ...[2024-11-19 11:38:11.188562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c980a000 len:0x1000 00:06:57.929 [2024-11-19 11:38:11.188637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.929 passed 00:06:57.929 Test: blockdev nvme passthru rw ...passed 00:06:57.929 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:38:11.191126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:57.929 [2024-11-19 11:38:11.191180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:57.929 passed 00:06:57.929 Test: blockdev nvme admin passthru ...passed 00:06:57.929 Test: blockdev copy ...passed 00:06:57.929 Suite: bdevio tests on: Nvme2n3 00:06:57.929 Test: blockdev write read block ...passed 00:06:57.929 Test: blockdev write zeroes read block ...passed 00:06:57.929 Test: blockdev write zeroes read no split ...passed 00:06:57.929 Test: blockdev write zeroes read split ...passed 00:06:57.929 Test: blockdev write zeroes read split partial ...passed 00:06:57.929 Test: blockdev reset ...[2024-11-19 11:38:11.221273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:57.929 passed 00:06:57.929 Test: blockdev write read 8 blocks ...[2024-11-19 11:38:11.225186] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:57.930 passed 00:06:57.930 Test: blockdev write read size > 128k ...passed 00:06:57.930 Test: blockdev write read invalid size ...passed 00:06:57.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.930 Test: blockdev write read max offset ...passed 00:06:57.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.930 Test: blockdev writev readv 8 blocks ...passed 00:06:57.930 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.930 Test: blockdev writev readv block ...passed 00:06:57.930 Test: blockdev writev readv size > 128k ...passed 00:06:57.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.930 Test: blockdev comparev and writev ...[2024-11-19 11:38:11.241924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9803000 len:0x1000 00:06:57.930 [2024-11-19 11:38:11.241981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.930 passed 00:06:57.930 Test: blockdev nvme passthru rw ...passed 00:06:57.930 Test: blockdev nvme passthru vendor specific ...passed 00:06:57.930 Test: blockdev nvme admin passthru ...[2024-11-19 11:38:11.244648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:57.930 [2024-11-19 11:38:11.244692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:57.930 passed 00:06:57.930 Test: blockdev copy ...passed 00:06:57.930 Suite: bdevio tests on: Nvme2n2 00:06:57.930 Test: blockdev write read block ...passed 00:06:57.930 Test: blockdev write zeroes read block ...passed 00:06:57.930 Test: blockdev write zeroes read no split ...passed 00:06:57.930 Test: blockdev write zeroes read split ...passed 00:06:57.930 Test: blockdev write zeroes read split partial ...passed 00:06:57.930 Test: blockdev reset ...[2024-11-19 11:38:11.275098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:57.930 [2024-11-19 11:38:11.278548] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
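The COMPARE FAILURE notices in these comparev suites are expected: the test issues a deliberately mismatching COMPARE and checks that it is rejected. SPDK prints completion status as (sct/sc), so (02/85) is status code type 0x2 (Media and Data Integrity Errors) with status code 0x85 (Compare Failure), while the (00/01) in the passthru tests is Generic Command Status / Invalid Opcode. A small hedged helper mirroring that decoding (the labels follow the NVMe spec; the function name is made up):

decode_nvme_status() {
    local sct=$1 sc=$2
    case $sct in
        0x0) echo "generic command status (sc=$sc)" ;;
        0x1) echo "command specific status (sc=$sc)" ;;
        0x2) echo "media/data integrity error (sc=$sc)" ;;
        *)   echo "sct=$sct sc=$sc" ;;
    esac
}
decode_nvme_status 0x2 0x85   # -> media/data integrity error (sc=0x85)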
00:06:57.930 passed 00:06:57.930 Test: blockdev write read 8 blocks ...passed 00:06:57.930 Test: blockdev write read size > 128k ...passed 00:06:57.930 Test: blockdev write read invalid size ...passed 00:06:57.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.930 Test: blockdev write read max offset ...passed 00:06:57.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:57.930 Test: blockdev writev readv 8 blocks ...passed 00:06:57.930 Test: blockdev writev readv 30 x 1block ...passed 00:06:57.930 Test: blockdev writev readv block ...passed 00:06:57.930 Test: blockdev writev readv size > 128k ...passed 00:06:57.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:57.930 Test: blockdev comparev and writev ...[2024-11-19 11:38:11.294699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9803000 len:0x1000 00:06:57.930 [2024-11-19 11:38:11.294876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:57.930 passed 00:06:57.930 Test: blockdev nvme passthru rw ...passed 00:06:57.930 Test: blockdev nvme passthru vendor specific ...passed 00:06:57.930 Test: blockdev nvme admin passthru ...[2024-11-19 11:38:11.297715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:57.930 [2024-11-19 11:38:11.297758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:57.930 passed 00:06:57.930 Test: blockdev copy ...passed 00:06:57.930 Suite: bdevio tests on: Nvme2n1 00:06:57.930 Test: blockdev write read block ...passed 00:06:57.930 Test: blockdev write zeroes read block ...passed 00:06:57.930 Test: blockdev write zeroes read no split ...passed 00:06:57.930 Test: blockdev write zeroes read split ...passed 00:06:57.930 Test: blockdev write zeroes read split partial ...passed 00:06:57.930 Test: blockdev reset ...[2024-11-19 11:38:11.328076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:57.930 [2024-11-19 11:38:11.331479] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:57.930 passed 00:06:57.930 Test: blockdev write read 8 blocks ...passed 00:06:57.930 Test: blockdev write read size > 128k ...passed 00:06:57.930 Test: blockdev write read invalid size ...passed 00:06:57.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:57.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:57.930 Test: blockdev write read max offset ...passed 00:06:58.192 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.192 Test: blockdev writev readv 8 blocks ...passed 00:06:58.192 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.192 Test: blockdev writev readv block ...passed 00:06:58.192 Test: blockdev writev readv size > 128k ...passed 00:06:58.192 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.192 Test: blockdev comparev and writev ...[2024-11-19 11:38:11.348622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:58.192 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2c9803000 len:0x1000 00:06:58.192 [2024-11-19 11:38:11.348785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.192 passed 00:06:58.192 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:38:11.351427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.192 [2024-11-19 11:38:11.351475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.192 passed 00:06:58.192 Test: blockdev nvme admin passthru ...passed 00:06:58.192 Test: blockdev copy ...passed 00:06:58.192 Suite: bdevio tests on: Nvme1n1 00:06:58.192 Test: blockdev write read block ...passed 00:06:58.192 Test: blockdev write zeroes read block ...passed 00:06:58.192 Test: blockdev write zeroes read no split ...passed 00:06:58.192 Test: blockdev write zeroes read split ...passed 00:06:58.193 Test: blockdev write zeroes read split partial ...passed 00:06:58.193 Test: blockdev reset ...[2024-11-19 11:38:11.380903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:58.193 passed 00:06:58.193 Test: blockdev write read 8 blocks ...[2024-11-19 11:38:11.384106] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:58.193 passed 00:06:58.193 Test: blockdev write read size > 128k ...passed 00:06:58.193 Test: blockdev write read invalid size ...passed 00:06:58.193 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.193 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.193 Test: blockdev write read max offset ...passed 00:06:58.193 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.193 Test: blockdev writev readv 8 blocks ...passed 00:06:58.193 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.193 Test: blockdev writev readv block ...passed 00:06:58.193 Test: blockdev writev readv size > 128k ...passed 00:06:58.193 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.193 Test: blockdev comparev and writev ...[2024-11-19 11:38:11.400563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9c36000 len:0x1000 00:06:58.193 [2024-11-19 11:38:11.400614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.193 passed 00:06:58.193 Test: blockdev nvme passthru rw ...passed 00:06:58.193 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.193 Test: blockdev nvme admin passthru ...[2024-11-19 11:38:11.403325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.193 [2024-11-19 11:38:11.403372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.193 passed 00:06:58.193 Test: blockdev copy ...passed 00:06:58.193 Suite: bdevio tests on: Nvme0n1 00:06:58.193 Test: blockdev write read block ...passed 00:06:58.193 Test: blockdev write zeroes read block ...passed 00:06:58.193 Test: blockdev write zeroes read no split ...passed 00:06:58.193 Test: blockdev write zeroes read split ...passed 00:06:58.193 Test: blockdev write zeroes read split partial ...passed 00:06:58.193 Test: blockdev reset ...[2024-11-19 11:38:11.436305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:58.193 [2024-11-19 11:38:11.438480] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:58.193 passed 00:06:58.193 Test: blockdev write read 8 blocks ...passed 00:06:58.193 Test: blockdev write read size > 128k ...passed 00:06:58.193 Test: blockdev write read invalid size ...passed 00:06:58.193 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.193 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.193 Test: blockdev write read max offset ...passed 00:06:58.193 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.193 Test: blockdev writev readv 8 blocks ...passed 00:06:58.193 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.193 Test: blockdev writev readv block ...passed 00:06:58.193 Test: blockdev writev readv size > 128k ...passed 00:06:58.193 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.193 Test: blockdev comparev and writev ...passed 00:06:58.193 Test: blockdev nvme passthru rw ...[2024-11-19 11:38:11.454549] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:58.193 separate metadata which is not supported yet. 
00:06:58.193 passed 00:06:58.193 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.193 Test: blockdev nvme admin passthru ...[2024-11-19 11:38:11.456040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:58.193 [2024-11-19 11:38:11.456100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:58.193 passed 00:06:58.193 Test: blockdev copy ...passed 00:06:58.193 00:06:58.193 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.193 suites 6 6 n/a 0 0 00:06:58.193 tests 138 138 138 0 0 00:06:58.193 asserts 893 893 893 0 n/a 00:06:58.193 00:06:58.193 Elapsed time = 0.731 seconds 00:06:58.193 0 00:06:58.193 11:38:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71888 00:06:58.193 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71888 ']' 00:06:58.193 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71888 00:06:58.193 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:58.193 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.193 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71888 00:06:58.193 killing process with pid 71888 00:06:58.193 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.193 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.193 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71888' 00:06:58.193 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71888 00:06:58.193 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71888 00:06:58.455 ************************************ 00:06:58.455 END TEST bdev_bounds 00:06:58.455 ************************************ 00:06:58.455 11:38:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:58.455 00:06:58.455 real 0m1.580s 00:06:58.455 user 0m3.921s 00:06:58.455 sys 0m0.286s 00:06:58.455 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.455 11:38:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:58.455 11:38:11 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:58.455 11:38:11 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:58.455 11:38:11 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.455 11:38:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.455 ************************************ 00:06:58.455 START TEST bdev_nbd 00:06:58.455 ************************************ 00:06:58.455 11:38:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:58.455 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:58.455 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:58.455 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.455 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:58.455 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.455 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:58.455 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:58.455 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:58.455 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:58.455 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71942 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:58.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71942 /var/tmp/spdk-nbd.sock 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 71942 ']' 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:58.456 11:38:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:58.456 [2024-11-19 11:38:11.832396] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
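The nbd test that follows exports each bdev through the kernel NBD driver and proves it readable with a direct-I/O copy. Condensed to one device (the scratch path /tmp/nbdtest and the 0.1 s poll interval are stand-ins for the harness's own choices):

# Export a bdev over NBD; the RPC prints the allocated /dev/nbdX path
nbd_dev=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)
name=$(basename "$nbd_dev")
# waitfornbd: ready once the device shows up in /proc/partitions
for (( i = 1; i <= 20; i++ )); do
    grep -q -w "$name" /proc/partitions && break
    sleep 0.1
done
# One 4 KiB O_DIRECT read through the NBD node, then verify the copy size
dd if="$nbd_dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]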
00:06:58.456 [2024-11-19 11:38:11.832760] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.718 [2024-11-19 11:38:11.971396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.718 [2024-11-19 11:38:12.026548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:59.291 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.551 1+0 records in 
00:06:59.551 1+0 records out 00:06:59.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116312 s, 3.5 MB/s 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:59.551 11:38:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.812 1+0 records in 00:06:59.812 1+0 records out 00:06:59.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000933408 s, 4.4 MB/s 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:59.812 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:00.074 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:00.074 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:00.074 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:00.074 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:00.074 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.074 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.074 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.074 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:00.074 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.074 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.074 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.074 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.074 1+0 records in 00:07:00.074 1+0 records out 00:07:00.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000997323 s, 4.1 MB/s 00:07:00.075 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.075 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.075 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.075 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.075 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.075 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.075 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:00.075 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.336 1+0 records in 00:07:00.336 1+0 records out 00:07:00.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105497 s, 3.9 MB/s 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.336 11:38:13 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:00.336 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.598 1+0 records in 00:07:00.598 1+0 records out 00:07:00.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00096141 s, 4.3 MB/s 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:00.598 11:38:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:00.859 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:00.859 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:00.859 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.860 1+0 records in 00:07:00.860 1+0 records out 00:07:00.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00148953 s, 2.7 MB/s 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:00.860 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.121 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:01.121 { 00:07:01.121 "nbd_device": "/dev/nbd0", 00:07:01.121 "bdev_name": "Nvme0n1" 00:07:01.121 }, 00:07:01.121 { 00:07:01.121 "nbd_device": "/dev/nbd1", 00:07:01.121 "bdev_name": "Nvme1n1" 00:07:01.121 }, 00:07:01.121 { 00:07:01.121 "nbd_device": "/dev/nbd2", 00:07:01.121 "bdev_name": "Nvme2n1" 00:07:01.121 }, 00:07:01.121 { 00:07:01.121 "nbd_device": "/dev/nbd3", 00:07:01.121 "bdev_name": "Nvme2n2" 00:07:01.121 }, 00:07:01.121 { 00:07:01.121 "nbd_device": "/dev/nbd4", 00:07:01.121 "bdev_name": "Nvme2n3" 00:07:01.121 }, 00:07:01.121 { 00:07:01.121 "nbd_device": "/dev/nbd5", 00:07:01.121 "bdev_name": "Nvme3n1" 00:07:01.121 } 00:07:01.121 ]' 00:07:01.121 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:01.121 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:01.121 { 00:07:01.121 "nbd_device": "/dev/nbd0", 00:07:01.121 "bdev_name": "Nvme0n1" 00:07:01.121 }, 00:07:01.121 { 00:07:01.121 "nbd_device": "/dev/nbd1", 00:07:01.121 "bdev_name": "Nvme1n1" 00:07:01.121 }, 00:07:01.121 { 00:07:01.121 "nbd_device": "/dev/nbd2", 00:07:01.121 "bdev_name": "Nvme2n1" 00:07:01.121 }, 00:07:01.121 { 00:07:01.121 "nbd_device": "/dev/nbd3", 00:07:01.121 "bdev_name": "Nvme2n2" 00:07:01.121 }, 00:07:01.121 { 00:07:01.121 "nbd_device": "/dev/nbd4", 00:07:01.121 "bdev_name": "Nvme2n3" 00:07:01.121 }, 00:07:01.121 { 00:07:01.121 "nbd_device": "/dev/nbd5", 00:07:01.121 "bdev_name": "Nvme3n1" 00:07:01.121 } 00:07:01.121 ]' 00:07:01.121 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:01.121 11:38:14 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:01.121 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.121 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:01.121 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.121 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:01.121 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.121 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.382 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.382 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.382 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.382 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.382 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.382 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.382 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.383 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.383 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.383 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:01.644 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:01.644 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:01.644 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:01.644 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.644 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.644 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:01.644 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.644 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.644 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.644 11:38:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:01.644 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:01.644 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:01.644 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:01.644 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.644 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.644 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:01.644 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.644 11:38:15 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:01.644 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.644 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:01.904 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:01.904 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:01.904 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:01.904 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.904 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.904 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:01.904 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.904 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.904 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.904 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:02.166 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:02.166 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:02.166 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:02.166 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.166 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.166 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:02.166 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.166 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.166 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.166 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:02.427 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:02.427 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:02.427 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:02.427 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.427 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.427 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:02.427 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.427 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.427 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.427 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.427 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.688 11:38:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:02.688 11:38:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:02.688 /dev/nbd0 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.949 
11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.949 1+0 records in 00:07:02.949 1+0 records out 00:07:02.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540036 s, 7.6 MB/s 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:02.949 /dev/nbd1 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.949 1+0 records in 00:07:02.949 1+0 records out 00:07:02.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514566 s, 8.0 MB/s 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:02.949 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:03.208 /dev/nbd10 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.208 1+0 records in 00:07:03.208 1+0 records out 00:07:03.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302522 s, 13.5 MB/s 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:03.208 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:03.466 /dev/nbd11 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.466 11:38:16 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.466 1+0 records in 00:07:03.466 1+0 records out 00:07:03.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493321 s, 8.3 MB/s 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.466 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.467 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:03.467 11:38:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:03.725 /dev/nbd12 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.725 1+0 records in 00:07:03.725 1+0 records out 00:07:03.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465107 s, 8.8 MB/s 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:03.725 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:03.983 /dev/nbd13 
00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.983 1+0 records in 00:07:03.983 1+0 records out 00:07:03.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523741 s, 7.8 MB/s 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.983 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.242 { 00:07:04.242 "nbd_device": "/dev/nbd0", 00:07:04.242 "bdev_name": "Nvme0n1" 00:07:04.242 }, 00:07:04.242 { 00:07:04.242 "nbd_device": "/dev/nbd1", 00:07:04.242 "bdev_name": "Nvme1n1" 00:07:04.242 }, 00:07:04.242 { 00:07:04.242 "nbd_device": "/dev/nbd10", 00:07:04.242 "bdev_name": "Nvme2n1" 00:07:04.242 }, 00:07:04.242 { 00:07:04.242 "nbd_device": "/dev/nbd11", 00:07:04.242 "bdev_name": "Nvme2n2" 00:07:04.242 }, 00:07:04.242 { 00:07:04.242 "nbd_device": "/dev/nbd12", 00:07:04.242 "bdev_name": "Nvme2n3" 00:07:04.242 }, 00:07:04.242 { 00:07:04.242 "nbd_device": "/dev/nbd13", 00:07:04.242 "bdev_name": "Nvme3n1" 00:07:04.242 } 00:07:04.242 ]' 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.242 { 00:07:04.242 "nbd_device": "/dev/nbd0", 00:07:04.242 "bdev_name": "Nvme0n1" 00:07:04.242 }, 00:07:04.242 { 00:07:04.242 "nbd_device": "/dev/nbd1", 00:07:04.242 "bdev_name": "Nvme1n1" 00:07:04.242 
}, 00:07:04.242 { 00:07:04.242 "nbd_device": "/dev/nbd10", 00:07:04.242 "bdev_name": "Nvme2n1" 00:07:04.242 }, 00:07:04.242 { 00:07:04.242 "nbd_device": "/dev/nbd11", 00:07:04.242 "bdev_name": "Nvme2n2" 00:07:04.242 }, 00:07:04.242 { 00:07:04.242 "nbd_device": "/dev/nbd12", 00:07:04.242 "bdev_name": "Nvme2n3" 00:07:04.242 }, 00:07:04.242 { 00:07:04.242 "nbd_device": "/dev/nbd13", 00:07:04.242 "bdev_name": "Nvme3n1" 00:07:04.242 } 00:07:04.242 ]' 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.242 /dev/nbd1 00:07:04.242 /dev/nbd10 00:07:04.242 /dev/nbd11 00:07:04.242 /dev/nbd12 00:07:04.242 /dev/nbd13' 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.242 /dev/nbd1 00:07:04.242 /dev/nbd10 00:07:04.242 /dev/nbd11 00:07:04.242 /dev/nbd12 00:07:04.242 /dev/nbd13' 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:04.242 256+0 records in 00:07:04.242 256+0 records out 00:07:04.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00821262 s, 128 MB/s 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.242 256+0 records in 00:07:04.242 256+0 records out 00:07:04.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0656261 s, 16.0 MB/s 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.242 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.501 256+0 records in 00:07:04.501 256+0 records out 00:07:04.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0589746 s, 17.8 MB/s 00:07:04.501 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.501 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:04.501 256+0 records in 00:07:04.501 256+0 records out 00:07:04.501 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0613455 s, 17.1 MB/s 00:07:04.501 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.501 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:04.501 256+0 records in 00:07:04.501 256+0 records out 00:07:04.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0673006 s, 15.6 MB/s 00:07:04.501 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.501 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:04.501 256+0 records in 00:07:04.501 256+0 records out 00:07:04.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0705272 s, 14.9 MB/s 00:07:04.501 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.501 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:04.761 256+0 records in 00:07:04.761 256+0 records out 00:07:04.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0603725 s, 17.4 MB/s 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.761 11:38:17 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.761 11:38:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.761 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.761 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.761 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.761 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.761 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.761 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.022 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:05.284 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:05.284 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:05.284 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:05.284 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.284 
11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.284 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:05.284 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.284 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.284 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.284 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:05.545 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:05.545 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:05.545 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:05.545 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.545 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.545 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:05.545 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.545 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.545 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.545 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:05.835 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:05.835 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:05.835 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:05.835 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.835 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.835 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:05.835 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.835 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.835 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.835 11:38:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:05.835 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:05.835 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:05.835 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:05.835 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.835 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.835 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:05.835 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.835 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.835 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.835 11:38:19 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.835 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:06.109 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:06.368 malloc_lvol_verify 00:07:06.368 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:06.629 de1140be-ffea-421a-9fde-296040ebf54c 00:07:06.629 11:38:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:06.629 4a934c51-2549-419c-9149-b7790a4f89b2 00:07:06.629 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:06.887 /dev/nbd0 00:07:06.887 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:06.887 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:06.887 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:06.887 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:06.887 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:06.887 mke2fs 1.47.0 (5-Feb-2023) 00:07:06.887 Discarding device blocks: 0/4096 done 00:07:06.887 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:06.887 00:07:06.887 Allocating group tables: 0/1 done 00:07:06.887 Writing inode tables: 0/1 done 00:07:06.887 Creating journal (1024 blocks): done 00:07:06.887 Writing superblocks and filesystem accounting information: 0/1 done 00:07:06.887 00:07:06.887 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
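[Editor's note: the nbd_with_lvol_verify step traced above boils down to a short RPC sequence — create a malloc bdev, layer an lvstore and an lvol on it, export the lvol over NBD, and prove it works by formatting it. A condensed sketch reconstructed from this trace, not the canonical helper (which lives in bdev/nbd_common.sh); the socket path and sizes are the ones the log itself uses:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # 16 MB malloc bdev with 512-byte blocks to back the lvstore
    $rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512
    $rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc -s $sock bdev_lvol_create lvol 4 -l lvs      # 4 MiB lvol inside lvs
    $rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0   # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0   # a successful mkfs is the read/write sanity check
    $rpc -s $sock nbd_stop_disk /dev/nbd0

The mkfs output above ("4096 1k blocks") matches the 4 MiB lvol, which is what the check relies on.]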
00:07:06.887 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.887 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:06.887 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.887 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:06.887 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.887 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71942 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 71942 ']' 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 71942 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71942 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.146 killing process with pid 71942 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71942' 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 71942 00:07:07.146 11:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 71942 00:07:07.404 11:38:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:07.404 00:07:07.404 real 0m8.898s 00:07:07.404 user 0m13.050s 00:07:07.404 sys 0m3.018s 00:07:07.404 11:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.404 ************************************ 00:07:07.404 END TEST bdev_nbd 00:07:07.404 11:38:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:07.404 ************************************ 00:07:07.404 11:38:20 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:07.404 11:38:20 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:07.404 skipping fio tests on NVMe due to multi-ns failures. 00:07:07.404 11:38:20 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
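[Editor's note: every nbd_start_disk in the bdev_nbd test above is followed by the same readiness-plus-readback check, traced repeatedly from common/autotest_common.sh: poll /proc/partitions until the nbd device appears, then read one 4 KiB block with O_DIRECT and confirm a non-empty result. A minimal standalone sketch of that pattern — the polling interval is an assumption (the trace does not show the wait between retries) and /tmp/nbdtest stands in for the repo-local scratch file:

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            # device is ready once the kernel lists it as a partition/block device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing between polls
        done
        # read back one 4 KiB block with O_DIRECT to prove the device services I/O
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

The teardown mirrors it: nbd_stop_disk per device, then a waitfornbd_exit loop that polls /proc/partitions until the name disappears, and finally nbd_get_disks must return an empty JSON list before the test passes.]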
00:07:07.404 11:38:20 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:07.404 11:38:20 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:07.404 11:38:20 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:07.404 11:38:20 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.404 11:38:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.404 ************************************ 00:07:07.404 START TEST bdev_verify 00:07:07.404 ************************************ 00:07:07.405 11:38:20 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:07.405 [2024-11-19 11:38:20.748076] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:07.405 [2024-11-19 11:38:20.748178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72303 ] 00:07:07.665 [2024-11-19 11:38:20.883688] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.665 [2024-11-19 11:38:20.917912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.665 [2024-11-19 11:38:20.917958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.927 Running I/O for 5 seconds... 00:07:10.236 20608.00 IOPS, 80.50 MiB/s [2024-11-19T11:38:24.581Z] 21344.00 IOPS, 83.38 MiB/s [2024-11-19T11:38:25.514Z] 21376.00 IOPS, 83.50 MiB/s [2024-11-19T11:38:26.447Z] 21280.00 IOPS, 83.12 MiB/s [2024-11-19T11:38:26.447Z] 21363.20 IOPS, 83.45 MiB/s 00:07:13.035 Latency(us) 00:07:13.035 [2024-11-19T11:38:26.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.035 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.035 Verification LBA range: start 0x0 length 0xbd0bd 00:07:13.035 Nvme0n1 : 5.05 1760.10 6.88 0.00 0.00 72397.45 9074.22 91145.45 00:07:13.035 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.035 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:13.035 Nvme0n1 : 5.05 1748.20 6.83 0.00 0.00 72993.04 13208.02 77030.01 00:07:13.035 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.035 Verification LBA range: start 0x0 length 0xa0000 00:07:13.035 Nvme1n1 : 5.07 1766.47 6.90 0.00 0.00 72132.86 13712.15 83482.78 00:07:13.035 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.035 Verification LBA range: start 0xa0000 length 0xa0000 00:07:13.035 Nvme1n1 : 5.05 1747.61 6.83 0.00 0.00 72882.06 14922.04 69770.63 00:07:13.035 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.035 Verification LBA range: start 0x0 length 0x80000 00:07:13.035 Nvme2n1 : 5.07 1766.00 6.90 0.00 0.00 71989.48 14518.74 81466.29 00:07:13.035 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.035 Verification LBA range: start 0x80000 length 0x80000 00:07:13.035 Nvme2n1 : 5.06 1746.77 6.82 0.00 0.00 72793.39 16434.41 67350.84 00:07:13.035 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.035 Verification LBA range: start 0x0 length 0x80000 00:07:13.035 Nvme2n2 : 5.07 1765.54 6.90 0.00 0.00 71860.45 14821.22 78643.20 00:07:13.035 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.035 Verification LBA range: start 0x80000 length 0x80000 00:07:13.035 Nvme2n2 : 5.06 1746.26 6.82 0.00 0.00 72667.43 17543.48 66947.54 00:07:13.035 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.035 Verification LBA range: start 0x0 length 0x80000 00:07:13.035 Nvme2n3 : 5.08 1765.09 6.89 0.00 0.00 71716.50 13308.85 85902.57 00:07:13.035 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.036 Verification LBA range: start 0x80000 length 0x80000 00:07:13.036 Nvme2n3 : 5.07 1754.36 6.85 0.00 0.00 72192.99 5873.03 67754.14 00:07:13.036 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:13.036 Verification LBA range: start 0x0 length 0x20000 00:07:13.036 Nvme3n1 : 5.08 1764.62 6.89 0.00 0.00 71590.98 10737.82 93161.94 00:07:13.036 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:13.036 Verification LBA range: start 0x20000 length 0x20000 00:07:13.036 Nvme3n1 : 5.09 1761.25 6.88 0.00 0.00 71791.58 9124.63 69367.34 00:07:13.036 [2024-11-19T11:38:26.448Z] =================================================================================================================== 00:07:13.036 [2024-11-19T11:38:26.448Z] Total : 21092.26 82.39 0.00 0.00 72247.84 5873.03 93161.94 00:07:13.601 00:07:13.601 real 0m6.168s 00:07:13.601 user 0m11.652s 00:07:13.601 sys 0m0.195s 00:07:13.601 11:38:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.601 11:38:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:13.601 ************************************ 00:07:13.601 END TEST bdev_verify 00:07:13.601 ************************************ 00:07:13.601 11:38:26 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.601 11:38:26 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:13.601 11:38:26 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.601 11:38:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:13.601 ************************************ 00:07:13.601 START TEST bdev_verify_big_io 00:07:13.601 ************************************ 00:07:13.601 11:38:26 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.601 [2024-11-19 11:38:26.984614] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
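[Editor's note: bdev_verify above and bdev_verify_big_io below drive the same bdevperf binary against the same bdev.json; the only parameter that differs is the I/O size (-o 4096 vs -o 65536), which is why the big-I/O run shows far lower IOPS at comparable bandwidth. The two invocations reduce to (trailing empty positional argument omitted):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3     # bdev_verify
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3    # bdev_verify_big_io
]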
00:07:13.601 [2024-11-19 11:38:26.984716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72390 ] 00:07:13.858 [2024-11-19 11:38:27.119216] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.858 [2024-11-19 11:38:27.154060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.858 [2024-11-19 11:38:27.154202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.423 Running I/O for 5 seconds... 00:07:18.329 685.00 IOPS, 42.81 MiB/s [2024-11-19T11:38:33.691Z] 1454.00 IOPS, 90.88 MiB/s [2024-11-19T11:38:33.691Z] 2030.00 IOPS, 126.88 MiB/s [2024-11-19T11:38:33.691Z] 2138.00 IOPS, 133.62 MiB/s 00:07:20.279 Latency(us) 00:07:20.279 [2024-11-19T11:38:33.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.279 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.279 Verification LBA range: start 0x0 length 0xbd0b 00:07:20.279 Nvme0n1 : 5.93 113.97 7.12 0.00 0.00 1067010.35 17341.83 1122782.92 00:07:20.279 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.279 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:20.279 Nvme0n1 : 5.84 105.63 6.60 0.00 0.00 1161835.54 25811.10 1677721.60 00:07:20.279 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.279 Verification LBA range: start 0x0 length 0xa000 00:07:20.279 Nvme1n1 : 5.84 113.96 7.12 0.00 0.00 1036702.92 69367.34 922746.88 00:07:20.279 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.279 Verification LBA range: start 0xa000 length 0xa000 00:07:20.279 Nvme1n1 : 5.84 106.27 6.64 0.00 0.00 1112404.64 43354.58 1703532.70 00:07:20.279 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.279 Verification LBA range: start 0x0 length 0x8000 00:07:20.279 Nvme2n1 : 5.93 118.65 7.42 0.00 0.00 976804.63 84289.38 903388.55 00:07:20.279 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.279 Verification LBA range: start 0x8000 length 0x8000 00:07:20.279 Nvme2n1 : 5.96 111.70 6.98 0.00 0.00 1030625.59 67754.14 1729343.80 00:07:20.279 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.279 Verification LBA range: start 0x0 length 0x8000 00:07:20.279 Nvme2n2 : 5.96 124.66 7.79 0.00 0.00 909000.74 19559.98 929199.66 00:07:20.279 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.279 Verification LBA range: start 0x8000 length 0x8000 00:07:20.279 Nvme2n2 : 5.98 115.36 7.21 0.00 0.00 964938.62 25508.63 1755154.90 00:07:20.279 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.279 Verification LBA range: start 0x0 length 0x8000 00:07:20.279 Nvme2n3 : 5.96 128.81 8.05 0.00 0.00 852660.28 5167.26 948557.98 00:07:20.279 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.279 Verification LBA range: start 0x8000 length 0x8000 00:07:20.279 Nvme2n3 : 5.98 125.68 7.86 0.00 0.00 855085.65 14317.10 1361535.61 00:07:20.279 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.279 Verification LBA range: start 0x0 length 0x2000 00:07:20.279 Nvme3n1 : 5.97 132.63 8.29 0.00 0.00 798529.04 4738.76 955010.76 00:07:20.279 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.279 Verification LBA range: start 0x2000 length 0x2000 00:07:20.279 Nvme3n1 : 6.06 168.26 10.52 0.00 0.00 620848.60 270.97 1071160.71 00:07:20.279 [2024-11-19T11:38:33.691Z] =================================================================================================================== 00:07:20.279 [2024-11-19T11:38:33.691Z] Total : 1465.57 91.60 0.00 0.00 929228.84 270.97 1755154.90 00:07:21.214 00:07:21.214 real 0m7.405s 00:07:21.214 user 0m14.091s 00:07:21.214 sys 0m0.222s 00:07:21.215 11:38:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.215 ************************************ 00:07:21.215 END TEST bdev_verify_big_io 00:07:21.215 ************************************ 00:07:21.215 11:38:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:21.215 11:38:34 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.215 11:38:34 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:21.215 11:38:34 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.215 11:38:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.215 ************************************ 00:07:21.215 START TEST bdev_write_zeroes 00:07:21.215 ************************************ 00:07:21.215 11:38:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:21.215 [2024-11-19 11:38:34.448997] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:21.215 [2024-11-19 11:38:34.449104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72493 ] 00:07:21.215 [2024-11-19 11:38:34.584285] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.215 [2024-11-19 11:38:34.618564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.782 Running I/O for 1 seconds... 
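The write_zeroes stage now running measures the zero-fill path on a single core for one second; no data verification is involved, so the table below reports only throughput and latency per bdev. An equivalent standalone sketch under the same path assumptions:

    # Hedged sketch of the bdev_write_zeroes run recorded below.
    ./build/examples/bdevperf --json ./test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1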
00:07:22.715 53760.00 IOPS, 210.00 MiB/s 00:07:22.715 Latency(us) 00:07:22.715 [2024-11-19T11:38:36.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.715 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.715 Nvme0n1 : 1.02 8983.24 35.09 0.00 0.00 14210.64 6402.36 23290.49 00:07:22.715 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.715 Nvme1n1 : 1.02 8972.81 35.05 0.00 0.00 14210.45 10435.35 22080.59 00:07:22.715 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.715 Nvme2n1 : 1.02 8962.43 35.01 0.00 0.00 14183.08 10536.17 20568.22 00:07:22.715 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.715 Nvme2n2 : 1.02 8998.51 35.15 0.00 0.00 14079.57 7007.31 20467.40 00:07:22.715 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.715 Nvme2n3 : 1.03 8988.33 35.11 0.00 0.00 14059.90 6452.78 21072.34 00:07:22.715 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:22.715 Nvme3n1 : 1.02 8946.64 34.95 0.00 0.00 14091.13 6225.92 22383.06 00:07:22.715 [2024-11-19T11:38:36.127Z] =================================================================================================================== 00:07:22.715 [2024-11-19T11:38:36.127Z] Total : 53851.95 210.36 0.00 0.00 14138.97 6225.92 23290.49 00:07:22.973 00:07:22.973 real 0m1.829s 00:07:22.973 user 0m1.562s 00:07:22.973 sys 0m0.154s 00:07:22.973 11:38:36 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.973 ************************************ 00:07:22.973 END TEST bdev_write_zeroes 00:07:22.973 ************************************ 00:07:22.973 11:38:36 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:22.973 11:38:36 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:22.973 11:38:36 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:22.973 11:38:36 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.973 11:38:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:22.973 ************************************ 00:07:22.973 START TEST bdev_json_nonenclosed 00:07:22.973 ************************************ 00:07:22.973 11:38:36 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:22.973 [2024-11-19 11:38:36.342513] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:22.973 [2024-11-19 11:38:36.342620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72530 ] 00:07:23.231 [2024-11-19 11:38:36.477766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.231 [2024-11-19 11:38:36.511380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.231 [2024-11-19 11:38:36.511478] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:23.231 [2024-11-19 11:38:36.511494] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:23.231 [2024-11-19 11:38:36.511505] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.231 00:07:23.231 real 0m0.306s 00:07:23.231 user 0m0.118s 00:07:23.231 sys 0m0.085s 00:07:23.231 11:38:36 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.231 ************************************ 00:07:23.231 END TEST bdev_json_nonenclosed 00:07:23.231 ************************************ 00:07:23.231 11:38:36 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:23.231 11:38:36 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.231 11:38:36 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:23.231 11:38:36 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.231 11:38:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.488 ************************************ 00:07:23.488 START TEST bdev_json_nonarray 00:07:23.489 ************************************ 00:07:23.489 11:38:36 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:23.489 [2024-11-19 11:38:36.709391] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:23.489 [2024-11-19 11:38:36.709778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72555 ] 00:07:23.489 [2024-11-19 11:38:36.847268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.489 [2024-11-19 11:38:36.881330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.489 [2024-11-19 11:38:36.881439] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
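bdev_json_nonenclosed (above) and bdev_json_nonarray (below) are negative tests: each feeds bdevperf a deliberately malformed configuration and passes only if the app rejects it and shuts down cleanly, so the *ERROR* lines around here are expected output. The fixture files themselves are not reproduced in this log; hypothetical minimal configs that would trip the same two checks:

    # Hypothetical illustrations only -- the real fixtures may differ.
    # Trips "not enclosed in {}": top level is not a single JSON object.
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # Trips "'subsystems' should be an array": an object where an array is required.
    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF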
00:07:23.489 [2024-11-19 11:38:36.881459] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:23.489 [2024-11-19 11:38:36.881473] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.747 00:07:23.747 real 0m0.313s 00:07:23.747 user 0m0.130s 00:07:23.747 sys 0m0.079s 00:07:23.747 11:38:36 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.747 ************************************ 00:07:23.747 END TEST bdev_json_nonarray 00:07:23.747 ************************************ 00:07:23.747 11:38:36 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:23.747 11:38:37 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:23.747 11:38:37 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:23.747 11:38:37 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:23.747 11:38:37 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:23.747 11:38:37 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:23.747 11:38:37 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:23.747 11:38:37 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:23.747 11:38:37 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:23.747 11:38:37 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:23.747 11:38:37 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:23.747 11:38:37 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:23.747 00:07:23.747 real 0m29.752s 00:07:23.747 user 0m47.138s 00:07:23.747 sys 0m4.927s 00:07:23.747 11:38:37 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.747 ************************************ 00:07:23.747 END TEST blockdev_nvme 00:07:23.747 ************************************ 00:07:23.747 11:38:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.747 11:38:37 -- spdk/autotest.sh@209 -- # uname -s 00:07:23.747 11:38:37 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:23.747 11:38:37 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:23.747 11:38:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:23.747 11:38:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.747 11:38:37 -- common/autotest_common.sh@10 -- # set +x 00:07:23.747 ************************************ 00:07:23.747 START TEST blockdev_nvme_gpt 00:07:23.747 ************************************ 00:07:23.747 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:23.747 * Looking for test storage... 
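The gpt suite starting here reuses the same blockdev.sh driver with a different test type: it labels the first pristine kernel namespace (/dev/nvme0n1 below; kernel and SPDK enumeration orders differ, so the partitions later surface in the bdev dump as Nvme1n1p1/Nvme1n1p2 on base bdev Nvme1n1) with SPDK's partition type GUIDs, then re-runs the generic bdev tests against those partitions. Key commands, reconstructed from the trace further down (the GUIDs are parsed out of module/bdev/gpt/gpt.h):

    # As dispatched by the driver script:
    test/bdev/blockdev.sh gpt
    # Partitioning steps the suite performs on the chosen namespace:
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
        -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1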
00:07:24.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.006 11:38:37 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:24.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.006 --rc genhtml_branch_coverage=1 00:07:24.006 --rc genhtml_function_coverage=1 00:07:24.006 --rc genhtml_legend=1 00:07:24.006 --rc geninfo_all_blocks=1 00:07:24.006 --rc geninfo_unexecuted_blocks=1 00:07:24.006 00:07:24.006 ' 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:24.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.006 --rc 
genhtml_branch_coverage=1 00:07:24.006 --rc genhtml_function_coverage=1 00:07:24.006 --rc genhtml_legend=1 00:07:24.006 --rc geninfo_all_blocks=1 00:07:24.006 --rc geninfo_unexecuted_blocks=1 00:07:24.006 00:07:24.006 ' 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:24.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.006 --rc genhtml_branch_coverage=1 00:07:24.006 --rc genhtml_function_coverage=1 00:07:24.006 --rc genhtml_legend=1 00:07:24.006 --rc geninfo_all_blocks=1 00:07:24.006 --rc geninfo_unexecuted_blocks=1 00:07:24.006 00:07:24.006 ' 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:24.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.006 --rc genhtml_branch_coverage=1 00:07:24.006 --rc genhtml_function_coverage=1 00:07:24.006 --rc genhtml_legend=1 00:07:24.006 --rc geninfo_all_blocks=1 00:07:24.006 --rc geninfo_unexecuted_blocks=1 00:07:24.006 00:07:24.006 ' 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72633 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72633 
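At this point the harness launches a standalone spdk_tgt and waitforlisten blocks until the target's RPC socket answers; every rpc_cmd that follows in the trace goes over that socket. A simplified sketch of what those helpers amount to (default socket path; rpc_get_methods is a standard SPDK RPC -- this is not the actual autotest_common.sh implementation):

    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # Poll until the target is listening, then the test can issue RPCs.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
        sleep 0.1
    done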
00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 72633 ']' 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.006 11:38:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.006 11:38:37 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:24.006 [2024-11-19 11:38:37.308617] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:24.007 [2024-11-19 11:38:37.308736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72633 ] 00:07:24.265 [2024-11-19 11:38:37.447007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.265 [2024-11-19 11:38:37.482142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.831 11:38:38 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.831 11:38:38 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:24.831 11:38:38 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:24.831 11:38:38 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:24.831 11:38:38 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:25.090 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:25.347 Waiting for block devices as requested 00:07:25.347 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:25.347 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:25.347 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:25.606 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.875 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:30.875 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:30.875 11:38:43 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:30.875 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:30.875 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:30.875 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:30.875 11:38:43 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:30.875 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:30.875 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:30.876 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:30.876 BYT; 00:07:30.876 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:30.876 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:30.876 BYT; 00:07:30.876 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:30.876 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:30.876 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:30.876 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:30.876 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:30.876 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:30.876 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:30.876 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:30.876 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:30.876 11:38:43 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:30.876 11:38:43 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:30.876 11:38:44 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:30.876 11:38:44 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:30.876 11:38:44 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:30.876 11:38:44 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:30.876 11:38:44 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:31.808 The operation has completed successfully. 00:07:31.808 11:38:45 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:32.741 The operation has completed successfully. 00:07:32.741 11:38:46 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:33.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:33.563 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:33.563 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:33.563 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:33.820 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:33.820 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:33.820 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.820 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.820 [] 00:07:33.820 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.820 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:33.820 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:33.820 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:33.820 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:33.820 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:33.820 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.820 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.079 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.079 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:34.079 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:34.079 11:38:47 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.079 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.079 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.079 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:34.079 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.079 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.079 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.079 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:34.079 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:34.080 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "59f03597-38fe-4f9c-a6bc-bc6a50efd4ff"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "59f03597-38fe-4f9c-a6bc-bc6a50efd4ff",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4f854dac-95fc-4f11-9982-dd63daee4ee4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4f854dac-95fc-4f11-9982-dd63daee4ee4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "04cdd644-ea3b-4852-b65c-082ee364c833"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "04cdd644-ea3b-4852-b65c-082ee364c833",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f31b6b84-cbc8-442c-9e85-082f4e2fee06"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f31b6b84-cbc8-442c-9e85-082f4e2fee06",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c66e437e-3c99-4a2b-87b9-7c902b1eb3e9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c66e437e-3c99-4a2b-87b9-7c902b1eb3e9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:34.338 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:34.338 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:34.338 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:34.338 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72633 00:07:34.338 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 72633 ']' 00:07:34.338 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 72633 00:07:34.338 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:34.338 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.338 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72633 00:07:34.338 killing process with pid 72633 00:07:34.338 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.338 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.338 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72633' 00:07:34.338 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 72633 00:07:34.338 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 72633 00:07:34.596 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:34.596 11:38:47 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:34.596 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:34.596 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.596 11:38:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.596 ************************************ 00:07:34.596 START TEST bdev_hello_world 00:07:34.596 ************************************ 00:07:34.596 11:38:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:34.596 
[2024-11-19 11:38:47.877314] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:34.596 [2024-11-19 11:38:47.877446] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73239 ] 00:07:34.854 [2024-11-19 11:38:48.006214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.854 [2024-11-19 11:38:48.039901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.113 [2024-11-19 11:38:48.410912] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:35.113 [2024-11-19 11:38:48.410966] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:35.113 [2024-11-19 11:38:48.410987] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:35.113 [2024-11-19 11:38:48.413052] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:35.113 [2024-11-19 11:38:48.413795] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:35.113 [2024-11-19 11:38:48.413820] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:35.113 [2024-11-19 11:38:48.414192] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:35.113 00:07:35.113 [2024-11-19 11:38:48.414215] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:35.371 ************************************ 00:07:35.371 END TEST bdev_hello_world 00:07:35.371 ************************************ 00:07:35.371 00:07:35.371 real 0m0.771s 00:07:35.371 user 0m0.521s 00:07:35.371 sys 0m0.147s 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:35.371 11:38:48 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:35.371 11:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:35.371 11:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.371 11:38:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:35.371 ************************************ 00:07:35.371 START TEST bdev_bounds 00:07:35.371 ************************************ 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73270 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:35.371 Process bdevio pid: 73270 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73270' 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73270 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73270 ']' 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.371 11:38:48 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.371 11:38:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:35.371 [2024-11-19 11:38:48.697208] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:35.371 [2024-11-19 11:38:48.697344] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73270 ] 00:07:35.629 [2024-11-19 11:38:48.832555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.629 [2024-11-19 11:38:48.872601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.629 [2024-11-19 11:38:48.872630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.629 [2024-11-19 11:38:48.872651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.253 11:38:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.253 11:38:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:36.253 11:38:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:36.253 I/O targets: 00:07:36.253 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:36.253 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:36.253 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:36.253 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:36.253 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:36.253 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:36.253 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:36.253 00:07:36.253 00:07:36.253 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.253 http://cunit.sourceforge.net/ 00:07:36.253 00:07:36.253 00:07:36.253 Suite: bdevio tests on: Nvme3n1 00:07:36.253 Test: blockdev write read block ...passed 00:07:36.253 Test: blockdev write zeroes read block ...passed 00:07:36.253 Test: blockdev write zeroes read no split ...passed 00:07:36.511 Test: blockdev write zeroes read split ...passed 00:07:36.511 Test: blockdev write zeroes read split partial ...passed 00:07:36.511 Test: blockdev reset ...[2024-11-19 11:38:49.680945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:36.511 [2024-11-19 11:38:49.684090] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:36.511 passed 00:07:36.511 Test: blockdev write read 8 blocks ...passed 00:07:36.511 Test: blockdev write read size > 128k ...passed 00:07:36.511 Test: blockdev write read invalid size ...passed 00:07:36.511 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:36.511 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:36.511 Test: blockdev write read max offset ...passed 00:07:36.511 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:36.511 Test: blockdev writev readv 8 blocks ...passed 00:07:36.511 Test: blockdev writev readv 30 x 1block ...passed 00:07:36.511 Test: blockdev writev readv block ...passed 00:07:36.511 Test: blockdev writev readv size > 128k ...passed 00:07:36.511 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:36.511 Test: blockdev comparev and writev ...[2024-11-19 11:38:49.697281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4c0a000 len:0x1000 00:07:36.511 [2024-11-19 11:38:49.697363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:36.511 passed 00:07:36.511 Test: blockdev nvme passthru rw ...passed 00:07:36.511 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:38:49.699687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:36.511 [2024-11-19 11:38:49.699853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:36.511 passed 00:07:36.511 Test: blockdev nvme admin passthru ...passed 00:07:36.511 Test: blockdev copy ...passed 00:07:36.511 Suite: bdevio tests on: Nvme2n3 00:07:36.511 Test: blockdev write read block ...passed 00:07:36.511 Test: blockdev write zeroes read block ...passed 00:07:36.511 Test: blockdev write zeroes read no split ...passed 00:07:36.511 Test: blockdev write zeroes read split ...passed 00:07:36.511 Test: blockdev write zeroes read split partial ...passed 00:07:36.511 Test: blockdev reset ...[2024-11-19 11:38:49.715559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:36.511 [2024-11-19 11:38:49.717491] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
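The bdev_bounds test running in this stretch launches the bdevio harness against the same bdev.json and then drives every registered suite over its RPC socket, which is what produces the per-bdev suites logged here. A sketch of the equivalent manual invocation, using the paths from this run:

    sudo ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    bdevio_pid=$!
    # once /var/tmp/spdk.sock is listening, run all registered suites
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"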
00:07:36.511 passed 00:07:36.511 Test: blockdev write read 8 blocks ...passed 00:07:36.511 Test: blockdev write read size > 128k ...passed 00:07:36.511 Test: blockdev write read invalid size ...passed 00:07:36.511 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:36.511 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:36.511 Test: blockdev write read max offset ...passed 00:07:36.511 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:36.511 Test: blockdev writev readv 8 blocks ...passed 00:07:36.511 Test: blockdev writev readv 30 x 1block ...passed 00:07:36.511 Test: blockdev writev readv block ...passed 00:07:36.511 Test: blockdev writev readv size > 128k ...passed 00:07:36.511 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:36.511 Test: blockdev comparev and writev ...[2024-11-19 11:38:49.726715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0c04000 len:0x1000 00:07:36.511 [2024-11-19 11:38:49.726846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:36.511 passed 00:07:36.511 Test: blockdev nvme passthru rw ...passed 00:07:36.511 Test: blockdev nvme passthru vendor specific ...[2024-11-19 11:38:49.728393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:36.511 [2024-11-19 11:38:49.728438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:36.511 passed 00:07:36.511 Test: blockdev nvme admin passthru ...passed 00:07:36.511 Test: blockdev copy ...passed 00:07:36.511 Suite: bdevio tests on: Nvme2n2 00:07:36.511 Test: blockdev write read block ...passed 00:07:36.511 Test: blockdev write zeroes read block ...passed 00:07:36.511 Test: blockdev write zeroes read no split ...passed 00:07:36.511 Test: blockdev write zeroes read split ...passed 00:07:36.511 Test: blockdev write zeroes read split partial ...passed 00:07:36.511 Test: blockdev reset ...[2024-11-19 11:38:49.753351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:36.511 [2024-11-19 11:38:49.756129] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
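The COMPARE FAILURE (02/85) completions printed by each "comparev and writev" test are the expected negative path, not errors: bdevio issues an NVMe COMPARE against deliberately mismatched data, and each suite still reports passed. The status decodes as:

    # (02/85): SCT 0x2 = Media and Data Integrity Errors,
    #          SC 0x85 = Compare Failure (the intended miscompare)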
00:07:36.511 passed 00:07:36.511 Test: blockdev write read 8 blocks ...passed 00:07:36.511 Test: blockdev write read size > 128k ...passed 00:07:36.511 Test: blockdev write read invalid size ...passed 00:07:36.511 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:36.511 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:36.511 Test: blockdev write read max offset ...passed 00:07:36.511 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:36.511 Test: blockdev writev readv 8 blocks ...passed 00:07:36.511 Test: blockdev writev readv 30 x 1block ...passed 00:07:36.511 Test: blockdev writev readv block ...passed 00:07:36.511 Test: blockdev writev readv size > 128k ...passed 00:07:36.511 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:36.511 Test: blockdev comparev and writev ...[2024-11-19 11:38:49.771739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0c04000 len:0x1000 00:07:36.511 [2024-11-19 11:38:49.771854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:36.511 passed 00:07:36.511 Test: blockdev nvme passthru rw ...passed 00:07:36.511 Test: blockdev nvme passthru vendor specific ...passed 00:07:36.511 Test: blockdev nvme admin passthru ...[2024-11-19 11:38:49.772800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:36.511 [2024-11-19 11:38:49.772861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:36.511 passed 00:07:36.511 Test: blockdev copy ...passed 00:07:36.511 Suite: bdevio tests on: Nvme2n1 00:07:36.511 Test: blockdev write read block ...passed 00:07:36.511 Test: blockdev write zeroes read block ...passed 00:07:36.511 Test: blockdev write zeroes read no split ...passed 00:07:36.511 Test: blockdev write zeroes read split ...passed 00:07:36.511 Test: blockdev write zeroes read split partial ...passed 00:07:36.511 Test: blockdev reset ...[2024-11-19 11:38:49.787057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:36.511 passed 00:07:36.511 Test: blockdev write read 8 blocks ...[2024-11-19 11:38:49.788844] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:36.511 passed 00:07:36.511 Test: blockdev write read size > 128k ...passed 00:07:36.511 Test: blockdev write read invalid size ...passed 00:07:36.511 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:36.511 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:36.511 Test: blockdev write read max offset ...passed 00:07:36.511 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:36.511 Test: blockdev writev readv 8 blocks ...passed 00:07:36.511 Test: blockdev writev readv 30 x 1block ...passed 00:07:36.511 Test: blockdev writev readv block ...passed 00:07:36.511 Test: blockdev writev readv size > 128k ...passed 00:07:36.511 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:36.511 Test: blockdev comparev and writev ...[2024-11-19 11:38:49.792707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0c06000 len:0x1000 00:07:36.511 [2024-11-19 11:38:49.792754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:36.511 passed 00:07:36.511 Test: blockdev nvme passthru rw ...passed 00:07:36.511 Test: blockdev nvme passthru vendor specific ...passed 00:07:36.511 Test: blockdev nvme admin passthru ...[2024-11-19 11:38:49.793123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:36.511 [2024-11-19 11:38:49.793150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:36.511 passed 00:07:36.511 Test: blockdev copy ...passed 00:07:36.511 Suite: bdevio tests on: Nvme1n1p2 00:07:36.511 Test: blockdev write read block ...passed 00:07:36.511 Test: blockdev write zeroes read block ...passed 00:07:36.511 Test: blockdev write zeroes read no split ...passed 00:07:36.511 Test: blockdev write zeroes read split ...passed 00:07:36.511 Test: blockdev write zeroes read split partial ...passed 00:07:36.512 Test: blockdev reset ...[2024-11-19 11:38:49.809844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:36.512 [2024-11-19 11:38:49.811475] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:36.512 passed 00:07:36.512 Test: blockdev write read 8 blocks ...passed 00:07:36.512 Test: blockdev write read size > 128k ...passed 00:07:36.512 Test: blockdev write read invalid size ...passed 00:07:36.512 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:36.512 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:36.512 Test: blockdev write read max offset ...passed 00:07:36.512 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:36.512 Test: blockdev writev readv 8 blocks ...passed 00:07:36.512 Test: blockdev writev readv 30 x 1block ...passed 00:07:36.512 Test: blockdev writev readv block ...passed 00:07:36.512 Test: blockdev writev readv size > 128k ...passed 00:07:36.512 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:36.512 Test: blockdev comparev and writev ...[2024-11-19 11:38:49.825863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2b0c02000 len:0x1000 00:07:36.512 [2024-11-19 11:38:49.826000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:36.512 passed 00:07:36.512 Test: blockdev nvme passthru rw ...passed 00:07:36.512 Test: blockdev nvme passthru vendor specific ...passed 00:07:36.512 Test: blockdev nvme admin passthru ...passed 00:07:36.512 Test: blockdev copy ...passed 00:07:36.512 Suite: bdevio tests on: Nvme1n1p1 00:07:36.512 Test: blockdev write read block ...passed 00:07:36.512 Test: blockdev write zeroes read block ...passed 00:07:36.512 Test: blockdev write zeroes read no split ...passed 00:07:36.512 Test: blockdev write zeroes read split ...passed 00:07:36.512 Test: blockdev write zeroes read split partial ...passed 00:07:36.512 Test: blockdev reset ...[2024-11-19 11:38:49.843325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:36.512 [2024-11-19 11:38:49.844869] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
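The lba fields in the two partition suites show how bdevio's offset 0 is remapped onto the parent namespace: the gpt bdev Nvme1n1p2 begins at lba:655360 (above), and Nvme1n1p1 begins at lba:256 (next suite). A sketch for inspecting that mapping directly over the RPC socket used in this run; the exact driver_specific fields vary by SPDK version:

    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme1n1p1
    # the partition's start offset on the base bdev appears under driver_specific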
00:07:36.512 passed 00:07:36.512 Test: blockdev write read 8 blocks ...passed 00:07:36.512 Test: blockdev write read size > 128k ...passed 00:07:36.512 Test: blockdev write read invalid size ...passed 00:07:36.512 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:36.512 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:36.512 Test: blockdev write read max offset ...passed 00:07:36.512 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:36.512 Test: blockdev writev readv 8 blocks ...passed 00:07:36.512 Test: blockdev writev readv 30 x 1block ...passed 00:07:36.512 Test: blockdev writev readv block ...passed 00:07:36.512 Test: blockdev writev readv size > 128k ...passed 00:07:36.512 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:36.512 Test: blockdev comparev and writev ...[2024-11-19 11:38:49.856985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c863b000 len:0x1000 00:07:36.512 [2024-11-19 11:38:49.857104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:36.512 passed 00:07:36.512 Test: blockdev nvme passthru rw ...passed 00:07:36.512 Test: blockdev nvme passthru vendor specific ...passed 00:07:36.512 Test: blockdev nvme admin passthru ...passed 00:07:36.512 Test: blockdev copy ...passed 00:07:36.512 Suite: bdevio tests on: Nvme0n1 00:07:36.512 Test: blockdev write read block ...passed 00:07:36.512 Test: blockdev write zeroes read block ...passed 00:07:36.512 Test: blockdev write zeroes read no split ...passed 00:07:36.512 Test: blockdev write zeroes read split ...passed 00:07:36.512 Test: blockdev write zeroes read split partial ...passed 00:07:36.512 Test: blockdev reset ...[2024-11-19 11:38:49.876682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:36.512 passed 00:07:36.512 Test: blockdev write read 8 blocks ...[2024-11-19 11:38:49.881185] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:36.512 passed 00:07:36.512 Test: blockdev write read size > 128k ...passed 00:07:36.512 Test: blockdev write read invalid size ...passed 00:07:36.512 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:36.512 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:36.512 Test: blockdev write read max offset ...passed 00:07:36.512 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:36.512 Test: blockdev writev readv 8 blocks ...passed 00:07:36.512 Test: blockdev writev readv 30 x 1block ...passed 00:07:36.512 Test: blockdev writev readv block ...passed 00:07:36.512 Test: blockdev writev readv size > 128k ...passed 00:07:36.512 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:36.512 Test: blockdev comparev and writev ...passed 00:07:36.512 Test: blockdev nvme passthru rw ...[2024-11-19 11:38:49.892520] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:36.512 separate metadata which is not supported yet. 
00:07:36.512 passed 00:07:36.512 Test: blockdev nvme passthru vendor specific ...passed 00:07:36.512 Test: blockdev nvme admin passthru ...[2024-11-19 11:38:49.893249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:36.512 [2024-11-19 11:38:49.893298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:36.512 passed 00:07:36.512 Test: blockdev copy ...passed 00:07:36.512 00:07:36.512 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.512 suites 7 7 n/a 0 0 00:07:36.512 tests 161 161 161 0 0 00:07:36.512 asserts 1025 1025 1025 0 n/a 00:07:36.512 00:07:36.512 Elapsed time = 0.533 seconds 00:07:36.512 0 00:07:36.512 11:38:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73270 00:07:36.512 11:38:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73270 ']' 00:07:36.512 11:38:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73270 00:07:36.512 11:38:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:36.512 11:38:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.512 11:38:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73270 00:07:36.770 11:38:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.770 11:38:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.770 killing process with pid 73270 00:07:36.770 11:38:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73270' 00:07:36.770 11:38:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73270 00:07:36.770 11:38:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73270 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:36.770 00:07:36.770 real 0m1.454s 00:07:36.770 user 0m3.677s 00:07:36.770 sys 0m0.275s 00:07:36.770 ************************************ 00:07:36.770 END TEST bdev_bounds 00:07:36.770 ************************************ 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:36.770 11:38:50 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:36.770 11:38:50 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:36.770 11:38:50 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.770 11:38:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.770 ************************************ 00:07:36.770 START TEST bdev_nbd 00:07:36.770 ************************************ 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73318 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73318 /var/tmp/spdk-nbd.sock 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73318 ']' 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:36.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.770 11:38:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:37.028 [2024-11-19 11:38:50.220087] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:37.028 [2024-11-19 11:38:50.220361] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.028 [2024-11-19 11:38:50.355378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.028 [2024-11-19 11:38:50.391428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.962 1+0 records in 00:07:37.962 1+0 records out 00:07:37.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798309 s, 5.1 MB/s 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:37.962 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.236 1+0 records in 00:07:38.236 1+0 records out 00:07:38.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118027 s, 3.5 MB/s 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:38.236 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.494 1+0 records in 00:07:38.494 1+0 records out 00:07:38.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130557 s, 3.1 MB/s 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:38.494 11:38:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:38.752 1+0 records in 00:07:38.752 1+0 records out 00:07:38.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132212 s, 3.1 MB/s 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.752 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:38.753 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:38.753 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:38.753 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:38.753 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:38.753 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:38.753 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.011 1+0 records in 00:07:39.011 1+0 records out 00:07:39.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000926788 s, 4.4 MB/s 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:39.011 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
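Each nbd_start_disk call in this stretch exports one bdev through the kernel NBD driver, and waitfornbd then proves the node is usable by pulling a single 4 KiB block through it with O_DIRECT. A condensed sketch of that per-device step, with the device name taken from the assignment logged just below:

    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd5
    grep -q -w nbd5 /proc/partitions   # wait until the kernel sees the device
    dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct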
00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.270 1+0 records in 00:07:39.270 1+0 records out 00:07:39.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000837637 s, 4.9 MB/s 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:39.270 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.528 1+0 records in 00:07:39.528 1+0 records out 00:07:39.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000779186 s, 5.3 MB/s 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:39.528 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.529 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:39.529 11:38:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:39.529 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.529 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:39.529 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.529 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd0", 00:07:39.529 "bdev_name": "Nvme0n1" 00:07:39.529 }, 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd1", 00:07:39.529 "bdev_name": "Nvme1n1p1" 00:07:39.529 }, 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd2", 00:07:39.529 "bdev_name": "Nvme1n1p2" 00:07:39.529 }, 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd3", 00:07:39.529 "bdev_name": "Nvme2n1" 00:07:39.529 }, 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd4", 00:07:39.529 "bdev_name": "Nvme2n2" 00:07:39.529 }, 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd5", 00:07:39.529 "bdev_name": "Nvme2n3" 00:07:39.529 }, 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd6", 00:07:39.529 "bdev_name": "Nvme3n1" 00:07:39.529 } 00:07:39.529 ]' 00:07:39.529 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:39.529 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd0", 00:07:39.529 "bdev_name": "Nvme0n1" 00:07:39.529 }, 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd1", 00:07:39.529 "bdev_name": "Nvme1n1p1" 00:07:39.529 }, 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd2", 00:07:39.529 "bdev_name": "Nvme1n1p2" 00:07:39.529 }, 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd3", 00:07:39.529 "bdev_name": "Nvme2n1" 00:07:39.529 }, 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd4", 00:07:39.529 "bdev_name": "Nvme2n2" 00:07:39.529 }, 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd5", 00:07:39.529 "bdev_name": "Nvme2n3" 00:07:39.529 }, 00:07:39.529 { 00:07:39.529 "nbd_device": "/dev/nbd6", 00:07:39.529 "bdev_name": "Nvme3n1" 00:07:39.529 } 00:07:39.529 ]' 00:07:39.529 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:39.787 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:39.787 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.787 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:39.787 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:39.787 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:39.787 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.787 11:38:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:39.787 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:39.787 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:39.787 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:39.787 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.787 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.787 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:39.787 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:39.787 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.787 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.787 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:40.045 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:40.045 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:40.046 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:40.046 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.046 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.046 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:40.046 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:40.046 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.046 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.046 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:40.304 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:40.304 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:40.304 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:40.304 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.304 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.304 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:40.304 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:40.304 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.304 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.304 11:38:53 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:40.562 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:40.562 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:40.562 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:40.562 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.562 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.562 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:40.562 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:40.562 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.562 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.562 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:40.821 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:40.821 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:40.821 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:40.821 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.821 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.821 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:40.821 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:40.821 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.821 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.821 11:38:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:40.821 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:40.821 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:40.821 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:40.821 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.821 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.821 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:40.821 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:40.821 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.821 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.821 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:41.079 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:41.079 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:41.079 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
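Teardown mirrors setup: nbd_stop_disk detaches each export, and waitfornbd_exit polls /proc/partitions until the node disappears, retrying up to 20 times as in the helpers above. A sketch of the loop being logged here, with the bounded retry simplified to a plain poll:

    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6; do
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
        while grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
    done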
00:07:41.079 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.079 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.079 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:41.079 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.079 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.079 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:41.079 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.079 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.337 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:41.337 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:41.337 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.337 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:41.337 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:41.337 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.337 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:41.337 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:41.337 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:41.337 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:41.337 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:41.337 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:41.338 11:38:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:41.338 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:41.596 /dev/nbd0 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.596 1+0 records in 00:07:41.596 1+0 records out 00:07:41.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0008185 s, 5.0 MB/s 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:41.596 11:38:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:41.854 /dev/nbd1 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.854 11:38:55 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.854 1+0 records in 00:07:41.854 1+0 records out 00:07:41.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682546 s, 6.0 MB/s 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:41.854 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:42.112 /dev/nbd10 00:07:42.112 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:42.112 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:42.112 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:42.112 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:42.112 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:42.112 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:42.112 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:42.112 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:42.112 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:42.112 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:42.113 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.113 1+0 records in 00:07:42.113 1+0 records out 00:07:42.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776254 s, 5.3 MB/s 00:07:42.113 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.113 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:42.113 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.113 11:38:55 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:42.113 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:42.113 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.113 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:42.113 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:42.371 /dev/nbd11 00:07:42.371 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:42.371 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:42.371 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:42.371 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:42.371 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:42.371 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:42.371 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:42.371 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:42.371 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:42.372 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:42.372 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.372 1+0 records in 00:07:42.372 1+0 records out 00:07:42.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000747293 s, 5.5 MB/s 00:07:42.372 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.372 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:42.372 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.372 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:42.372 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:42.372 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.372 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:42.372 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:42.632 /dev/nbd12 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
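Each nbd_start_disk RPC in the trace above is paired with a waitfornbd call, which first polls /proc/partitions until the kernel registers the new device, then proves the device actually serves I/O with a single 4 KiB direct read. A condensed sketch of that helper, reconstructed from the xtrace (the 20-iteration bounds and the dd/stat/rm sequence match the trace; the retry sleep and the scratch-file path are illustrative assumptions):

    waitfornbd() {
        local nbd_name=$1 i
        # Phase 1: wait for the kernel to publish the device node.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; only the grep shows in the trace
        done
        # Phase 2: confirm the device answers reads with one 4 KiB O_DIRECT read.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                local size
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1   # assumed
        done
        return 1
    }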
00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.632 1+0 records in 00:07:42.632 1+0 records out 00:07:42.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000930711 s, 4.4 MB/s 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:42.632 11:38:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:42.632 /dev/nbd13 00:07:42.632 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:42.632 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:42.632 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:42.632 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:42.632 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:42.632 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:42.632 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.892 1+0 records in 00:07:42.892 1+0 records out 00:07:42.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070706 s, 5.8 MB/s 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:42.892 /dev/nbd14 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.892 1+0 records in 00:07:42.892 1+0 records out 00:07:42.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110171 s, 3.7 MB/s 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.892 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:43.150 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:43.150 { 00:07:43.150 "nbd_device": "/dev/nbd0", 00:07:43.151 "bdev_name": "Nvme0n1" 00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd1", 00:07:43.151 "bdev_name": "Nvme1n1p1" 00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd10", 00:07:43.151 "bdev_name": "Nvme1n1p2" 00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd11", 00:07:43.151 "bdev_name": "Nvme2n1" 00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd12", 00:07:43.151 "bdev_name": "Nvme2n2" 00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd13", 00:07:43.151 "bdev_name": "Nvme2n3" 
00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd14", 00:07:43.151 "bdev_name": "Nvme3n1" 00:07:43.151 } 00:07:43.151 ]' 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd0", 00:07:43.151 "bdev_name": "Nvme0n1" 00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd1", 00:07:43.151 "bdev_name": "Nvme1n1p1" 00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd10", 00:07:43.151 "bdev_name": "Nvme1n1p2" 00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd11", 00:07:43.151 "bdev_name": "Nvme2n1" 00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd12", 00:07:43.151 "bdev_name": "Nvme2n2" 00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd13", 00:07:43.151 "bdev_name": "Nvme2n3" 00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "nbd_device": "/dev/nbd14", 00:07:43.151 "bdev_name": "Nvme3n1" 00:07:43.151 } 00:07:43.151 ]' 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:43.151 /dev/nbd1 00:07:43.151 /dev/nbd10 00:07:43.151 /dev/nbd11 00:07:43.151 /dev/nbd12 00:07:43.151 /dev/nbd13 00:07:43.151 /dev/nbd14' 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:43.151 /dev/nbd1 00:07:43.151 /dev/nbd10 00:07:43.151 /dev/nbd11 00:07:43.151 /dev/nbd12 00:07:43.151 /dev/nbd13 00:07:43.151 /dev/nbd14' 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:43.151 256+0 records in 00:07:43.151 256+0 records out 00:07:43.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00947736 s, 111 MB/s 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.151 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:43.409 256+0 records in 00:07:43.409 256+0 records out 00:07:43.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.182319 s, 5.8 MB/s 00:07:43.409 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.409 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:43.667 256+0 records in 00:07:43.667 256+0 records out 00:07:43.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225992 s, 4.6 MB/s 00:07:43.667 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.667 11:38:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:43.925 256+0 records in 00:07:43.925 256+0 records out 00:07:43.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191821 s, 5.5 MB/s 00:07:43.925 11:38:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:43.925 11:38:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:44.184 256+0 records in 00:07:44.184 256+0 records out 00:07:44.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.226372 s, 4.6 MB/s 00:07:44.184 11:38:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:44.184 11:38:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:44.441 256+0 records in 00:07:44.441 256+0 records out 00:07:44.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227268 s, 4.6 MB/s 00:07:44.441 11:38:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:44.441 11:38:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:44.441 256+0 records in 00:07:44.441 256+0 records out 00:07:44.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224681 s, 4.7 MB/s 00:07:44.441 11:38:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:44.441 11:38:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:44.700 256+0 records in 00:07:44.700 256+0 records out 00:07:44.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223619 s, 4.7 MB/s 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:44.700 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.959 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:45.218 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:45.218 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:45.218 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:45.218 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.218 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.218 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:45.218 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.218 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.218 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.218 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:45.476 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:45.476 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:45.476 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:45.476 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.476 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.476 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:45.476 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.476 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.476 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.476 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:45.733 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:45.733 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:45.733 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:45.733 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.733 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.733 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:45.733 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.733 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.733 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.733 11:38:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.994 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:46.253 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:46.253 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:46.253 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:46.253 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.253 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.253 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:46.253 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.253 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.253 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.253 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.253 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:46.512 11:38:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:46.770 malloc_lvol_verify 00:07:46.770 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:47.029 48116c29-133f-41af-a520-d5c9ad25b6c5 00:07:47.029 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:47.029 29970f4a-f74d-463e-b992-6afab8f40ff6 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:47.287 /dev/nbd0 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:47.287 mke2fs 1.47.0 (5-Feb-2023) 00:07:47.287 Discarding device blocks: 0/4096 done 00:07:47.287 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:47.287 00:07:47.287 Allocating group tables: 0/1 done 00:07:47.287 Writing inode tables: 0/1 done 00:07:47.287 Creating journal (1024 blocks): done 00:07:47.287 Writing superblocks and filesystem accounting information: 0/1 done 00:07:47.287 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:47.287 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73318 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73318 ']' 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73318 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73318 00:07:47.546 killing process with pid 73318 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73318' 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73318 00:07:47.546 11:39:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73318 00:07:47.804 ************************************ 00:07:47.804 END TEST bdev_nbd 00:07:47.804 ************************************ 00:07:47.804 11:39:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:47.804 00:07:47.804 real 0m10.938s 00:07:47.804 user 0m15.245s 00:07:47.804 sys 0m3.717s 00:07:47.804 11:39:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.804 11:39:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:47.804 11:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:47.804 11:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:47.804 skipping fio tests on NVMe due to multi-ns failures. 00:07:47.804 11:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:47.804 11:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:47.804 11:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:47.804 11:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:47.804 11:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:47.804 11:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.804 11:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.804 ************************************ 00:07:47.804 START TEST bdev_verify 00:07:47.804 ************************************ 00:07:47.804 11:39:01 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:48.062 [2024-11-19 11:39:01.215191] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:48.062 [2024-11-19 11:39:01.215301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73739 ] 00:07:48.062 [2024-11-19 11:39:01.352119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:48.062 [2024-11-19 11:39:01.387312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.062 [2024-11-19 11:39:01.387464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.651 Running I/O for 5 seconds... 
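The bdev_nbd test that finished above closed with a full write/verify cycle: 1 MiB of random data (256 x 4 KiB blocks from /dev/urandom) was written to each of the seven /dev/nbd* devices with O_DIRECT, every device was compared byte-for-byte against the source file with cmp, the devices were detached with nbd_stop_disk until nbd_get_disks reported an empty list again, and a malloc-backed lvol was exported as /dev/nbd0 and formatted with mkfs.ext4 as a filesystem-level smoke test. The core of the cycle, condensed from the trace, with nbd_list holding the seven device paths:

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256           # 1 MiB pattern
    for nbd in "${nbd_list[@]}"; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct  # write phase
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M nbdrandtest $nbd                             # byte-wise verify
    done
    rm nbdrandtest

The bdev_verify run now in progress drives the same seven bdevs with the bdevperf example app: queue depth 128 (-q), 4 KiB I/Os (-o 4096), the verify workload (-w verify, which writes a known pattern and reads it back, failing on any miscompare), a 5-second run (-t 5), and a two-core mask (-m 0x3); judging by the paired Core Mask 0x1/0x2 rows per bdev in the results below, -C lets every core in the mask drive each bdev. The invocation, with paths shortened from the trace:

    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3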
00:07:50.958 21120.00 IOPS, 82.50 MiB/s [2024-11-19T11:39:05.303Z] 21696.00 IOPS, 84.75 MiB/s [2024-11-19T11:39:06.236Z] 22208.00 IOPS, 86.75 MiB/s [2024-11-19T11:39:07.170Z] 21728.00 IOPS, 84.88 MiB/s [2024-11-19T11:39:07.170Z] 21196.80 IOPS, 82.80 MiB/s 00:07:53.758 Latency(us) 00:07:53.758 [2024-11-19T11:39:07.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.758 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:53.758 Verification LBA range: start 0x0 length 0xbd0bd 00:07:53.758 Nvme0n1 : 5.07 1515.14 5.92 0.00 0.00 84164.12 15426.17 82676.18 00:07:53.758 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:53.758 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:53.758 Nvme0n1 : 5.07 1464.94 5.72 0.00 0.00 86408.66 17946.78 77030.01 00:07:53.758 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:53.758 Verification LBA range: start 0x0 length 0x4ff80 00:07:53.758 Nvme1n1p1 : 5.07 1514.71 5.92 0.00 0.00 83925.00 17543.48 75820.11 00:07:53.758 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:53.758 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:53.758 Nvme1n1p1 : 5.08 1475.29 5.76 0.00 0.00 85682.78 2571.03 82272.89 00:07:53.758 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:53.758 Verification LBA range: start 0x0 length 0x4ff7f 00:07:53.758 Nvme1n1p2 : 5.07 1514.23 5.91 0.00 0.00 83773.56 18955.03 74610.22 00:07:53.758 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:53.758 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:53.758 Nvme1n1p2 : 5.06 1466.93 5.73 0.00 0.00 87014.50 15526.99 80659.69 00:07:53.758 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:53.758 Verification LBA range: start 0x0 length 0x80000 00:07:53.758 Nvme2n1 : 5.09 1522.83 5.95 0.00 0.00 83186.92 6604.01 71383.83 00:07:53.758 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:53.758 Verification LBA range: start 0x80000 length 0x80000 00:07:53.758 Nvme2n1 : 5.06 1466.53 5.73 0.00 0.00 86938.12 18450.90 77433.30 00:07:53.758 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:53.758 Verification LBA range: start 0x0 length 0x80000 00:07:53.759 Nvme2n2 : 5.09 1522.43 5.95 0.00 0.00 83037.99 7108.14 70577.23 00:07:53.759 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:53.759 Verification LBA range: start 0x80000 length 0x80000 00:07:53.759 Nvme2n2 : 5.06 1466.13 5.73 0.00 0.00 86810.01 19862.45 74610.22 00:07:53.759 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:53.759 Verification LBA range: start 0x0 length 0x80000 00:07:53.759 Nvme2n3 : 5.09 1522.02 5.95 0.00 0.00 82914.61 7360.20 74206.92 00:07:53.759 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:53.759 Verification LBA range: start 0x80000 length 0x80000 00:07:53.759 Nvme2n3 : 5.07 1465.73 5.73 0.00 0.00 86680.13 20971.52 72593.72 00:07:53.759 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:53.759 Verification LBA range: start 0x0 length 0x20000 00:07:53.759 Nvme3n1 : 5.09 1521.61 5.94 0.00 0.00 82861.43 6906.49 77030.01 00:07:53.759 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:53.759 Verification LBA range: start 0x20000 length 0x20000 00:07:53.759 Nvme3n1 : 
5.07 1465.33 5.72 0.00 0.00 86547.60 19862.45 74610.22 00:07:53.759 [2024-11-19T11:39:07.171Z] =================================================================================================================== 00:07:53.759 [2024-11-19T11:39:07.171Z] Total : 20903.84 81.66 0.00 0.00 84964.88 2571.03 82676.18 00:07:54.324 00:07:54.324 real 0m6.341s 00:07:54.324 user 0m12.009s 00:07:54.324 sys 0m0.173s 00:07:54.324 11:39:07 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.324 ************************************ 00:07:54.324 END TEST bdev_verify 00:07:54.324 ************************************ 00:07:54.324 11:39:07 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:54.324 11:39:07 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:54.324 11:39:07 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:54.324 11:39:07 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.324 11:39:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:54.324 ************************************ 00:07:54.325 START TEST bdev_verify_big_io 00:07:54.325 ************************************ 00:07:54.325 11:39:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:54.325 [2024-11-19 11:39:07.622772] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:54.325 [2024-11-19 11:39:07.622878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73826 ] 00:07:54.583 [2024-11-19 11:39:07.755837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:54.583 [2024-11-19 11:39:07.789445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.583 [2024-11-19 11:39:07.789530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.841 Running I/O for 5 seconds... 
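bdev_verify_big_io repeats the verify workload with 64 KiB I/Os, so the only change to the invocation is -o 65536 in place of -o 4096; this stresses large-transfer handling and request splitting rather than small-I/O rate. At this size the MiB/s column follows directly from the IOPS column (IOPS x 64 KiB / 1024): for example, 107.43 IOPS x 0.0625 MiB = 6.71 MiB/s for Nvme0n1 on core 0 in the table below.

    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3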
00:08:00.675 1374.00 IOPS, 85.88 MiB/s [2024-11-19T11:39:14.345Z] 2670.50 IOPS, 166.91 MiB/s [2024-11-19T11:39:14.345Z] 3034.00 IOPS, 189.63 MiB/s 00:08:00.933 Latency(us) 00:08:00.933 [2024-11-19T11:39:14.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.933 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.933 Verification LBA range: start 0x0 length 0xbd0b 00:08:00.933 Nvme0n1 : 5.87 107.43 6.71 0.00 0.00 1137971.32 32062.23 1471232.79 00:08:00.933 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.933 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:00.933 Nvme0n1 : 5.88 108.01 6.75 0.00 0.00 1101949.46 25306.98 1077613.49 00:08:00.933 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.934 Verification LBA range: start 0x0 length 0x4ff8 00:08:00.934 Nvme1n1p1 : 5.88 103.11 6.44 0.00 0.00 1142877.30 112116.97 1477685.56 00:08:00.934 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.934 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:00.934 Nvme1n1p1 : 5.88 113.87 7.12 0.00 0.00 1045798.66 85902.57 1051802.39 00:08:00.934 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.934 Verification LBA range: start 0x0 length 0x4ff7 00:08:00.934 Nvme1n1p2 : 5.95 104.47 6.53 0.00 0.00 1107738.98 73400.32 1768060.46 00:08:00.934 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.934 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:00.934 Nvme1n1p2 : 5.89 110.60 6.91 0.00 0.00 1040715.09 140347.86 1200216.22 00:08:00.934 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.934 Verification LBA range: start 0x0 length 0x8000 00:08:00.934 Nvme2n1 : 6.01 108.54 6.78 0.00 0.00 1040925.39 49807.36 1793871.56 00:08:00.934 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.934 Verification LBA range: start 0x8000 length 0x8000 00:08:00.934 Nvme2n1 : 5.96 117.33 7.33 0.00 0.00 962483.61 68560.74 974369.08 00:08:00.934 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.934 Verification LBA range: start 0x0 length 0x8000 00:08:00.934 Nvme2n2 : 6.01 109.17 6.82 0.00 0.00 1002460.73 50613.96 1832588.21 00:08:00.934 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.934 Verification LBA range: start 0x8000 length 0x8000 00:08:00.934 Nvme2n2 : 6.00 123.13 7.70 0.00 0.00 899408.10 35691.91 1019538.51 00:08:00.934 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.934 Verification LBA range: start 0x0 length 0x8000 00:08:00.934 Nvme2n3 : 6.05 118.92 7.43 0.00 0.00 896099.10 15325.34 1871304.86 00:08:00.934 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.934 Verification LBA range: start 0x8000 length 0x8000 00:08:00.934 Nvme2n3 : 6.01 128.30 8.02 0.00 0.00 846866.89 4587.52 1032444.06 00:08:00.934 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:00.934 Verification LBA range: start 0x0 length 0x2000 00:08:00.934 Nvme3n1 : 6.10 156.93 9.81 0.00 0.00 666259.57 319.80 1393799.48 00:08:00.934 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:00.934 Verification LBA range: start 0x2000 length 0x2000 00:08:00.934 Nvme3n1 : 6.01 131.72 8.23 0.00 0.00 806351.96 6553.60 1038896.84 00:08:00.934 
[2024-11-19T11:39:14.346Z] =================================================================================================================== 00:08:00.934 [2024-11-19T11:39:14.346Z] Total : 1641.53 102.60 0.00 0.00 961683.40 319.80 1871304.86 00:08:01.868 00:08:01.868 real 0m7.498s 00:08:01.868 user 0m14.305s 00:08:01.868 sys 0m0.208s 00:08:01.868 11:39:15 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.868 ************************************ 00:08:01.868 END TEST bdev_verify_big_io 00:08:01.868 ************************************ 00:08:01.868 11:39:15 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:01.868 11:39:15 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:01.868 11:39:15 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:01.868 11:39:15 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.868 11:39:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:01.868 ************************************ 00:08:01.868 START TEST bdev_write_zeroes 00:08:01.868 ************************************ 00:08:01.868 11:39:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:01.868 [2024-11-19 11:39:15.181727] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:01.868 [2024-11-19 11:39:15.181837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73936 ] 00:08:02.126 [2024-11-19 11:39:15.317076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.126 [2024-11-19 11:39:15.346129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.384 Running I/O for 1 seconds... 
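bdev_write_zeroes swaps in the zero-fill workload: bdevperf issues write-zeroes commands (-w write_zeroes) at queue depth 128 with 4 KiB I/Os for one second, and this time on a single core (no -m flag appears in the trace, and the EAL line shows -c 0x1). The invocation, with paths shortened from the trace:

    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1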
00:08:03.759 64064.00 IOPS, 250.25 MiB/s 00:08:03.759 Latency(us) 00:08:03.759 [2024-11-19T11:39:17.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.759 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:03.759 Nvme0n1 : 1.03 9060.43 35.39 0.00 0.00 14095.71 10132.87 28634.19 00:08:03.759 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:03.759 Nvme1n1p1 : 1.03 9049.45 35.35 0.00 0.00 14096.26 10586.58 28835.84 00:08:03.759 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:03.759 Nvme1n1p2 : 1.03 9038.42 35.31 0.00 0.00 14089.31 10637.00 27625.94 00:08:03.759 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:03.759 Nvme2n1 : 1.03 9028.34 35.27 0.00 0.00 14050.97 9527.93 27021.00 00:08:03.759 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:03.759 Nvme2n2 : 1.04 9018.29 35.23 0.00 0.00 14037.16 8620.50 26214.40 00:08:03.759 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:03.759 Nvme2n3 : 1.04 9008.15 35.19 0.00 0.00 14029.74 8217.21 27021.00 00:08:03.759 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:03.759 Nvme3n1 : 1.04 8998.15 35.15 0.00 0.00 14024.86 7713.08 28634.19 00:08:03.759 [2024-11-19T11:39:17.171Z] =================================================================================================================== 00:08:03.759 [2024-11-19T11:39:17.171Z] Total : 63201.22 246.88 0.00 0.00 14060.57 7713.08 28835.84 00:08:03.759 00:08:03.759 real 0m1.815s 00:08:03.759 user 0m1.556s 00:08:03.759 sys 0m0.150s 00:08:03.759 ************************************ 00:08:03.759 END TEST bdev_write_zeroes 00:08:03.759 11:39:16 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.759 11:39:16 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:03.759 ************************************ 00:08:03.759 11:39:16 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:03.759 11:39:16 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:03.759 11:39:16 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.759 11:39:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.759 ************************************ 00:08:03.759 START TEST bdev_json_nonenclosed 00:08:03.759 ************************************ 00:08:03.759 11:39:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:03.759 [2024-11-19 11:39:17.062920] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
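bdev_json_nonenclosed is a negative test: bdevperf is pointed at nonenclosed.json, a config whose JSON is deliberately not wrapped in a top-level object, and the pass condition is the json_config error and the non-zero spdk_app_stop seen below, not a successful run. The file's contents do not appear in the trace; a plausible minimal shape would be a bare key with no enclosing braces:

    "subsystems": []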
00:08:03.759 [2024-11-19 11:39:17.063028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73967 ] 00:08:04.016 [2024-11-19 11:39:17.199490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.016 [2024-11-19 11:39:17.232058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.016 [2024-11-19 11:39:17.232149] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:04.016 [2024-11-19 11:39:17.232164] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:04.016 [2024-11-19 11:39:17.232177] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.016 00:08:04.016 real 0m0.307s 00:08:04.016 user 0m0.120s 00:08:04.016 sys 0m0.083s 00:08:04.016 11:39:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.016 ************************************ 00:08:04.016 END TEST bdev_json_nonenclosed 00:08:04.016 ************************************ 00:08:04.016 11:39:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:04.016 11:39:17 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:04.016 11:39:17 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:04.016 11:39:17 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.016 11:39:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:04.016 ************************************ 00:08:04.016 START TEST bdev_json_nonarray 00:08:04.016 ************************************ 00:08:04.016 11:39:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:04.274 [2024-11-19 11:39:17.440145] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:04.274 [2024-11-19 11:39:17.440254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73998 ] 00:08:04.274 [2024-11-19 11:39:17.574476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.274 [2024-11-19 11:39:17.607840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.274 [2024-11-19 11:39:17.607940] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
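bdev_json_nonarray is the companion negative test: its config is enclosed in an object but defines "subsystems" as something other than an array, which triggers the 'subsystems' should be an array error above. The file contents are likewise not shown in the trace; a plausible shape would be:

    { "subsystems": {} }

In both cases the test passes because the app rejects the config and exits through spdk_app_stop with a non-zero status, as the warning below confirms.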
00:08:04.274 [2024-11-19 11:39:17.607955] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:04.274 [2024-11-19 11:39:17.607966] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.533 00:08:04.533 real 0m0.308s 00:08:04.533 user 0m0.125s 00:08:04.533 sys 0m0.079s 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:04.533 ************************************ 00:08:04.533 END TEST bdev_json_nonarray 00:08:04.533 ************************************ 00:08:04.533 11:39:17 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:04.533 11:39:17 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:04.533 11:39:17 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:04.533 11:39:17 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.533 11:39:17 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.533 11:39:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:04.533 ************************************ 00:08:04.533 START TEST bdev_gpt_uuid 00:08:04.533 ************************************ 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74018 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74018 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:04.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74018 ']' 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.533 11:39:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:04.533 [2024-11-19 11:39:17.817930] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:04.533 [2024-11-19 11:39:17.818044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74018 ] 00:08:04.792 [2024-11-19 11:39:17.949311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.792 [2024-11-19 11:39:17.982791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.358 11:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.359 11:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:05.359 11:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:05.359 11:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.359 11:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:05.617 Some configs were skipped because the RPC state that can call them passed over. 00:08:05.617 11:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.617 11:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:05.617 11:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.617 11:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:05.617 11:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.617 11:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:05.617 11:39:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.617 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:05.617 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.617 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:05.617 { 00:08:05.617 "name": "Nvme1n1p1", 00:08:05.617 "aliases": [ 00:08:05.617 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:05.617 ], 00:08:05.617 "product_name": "GPT Disk", 00:08:05.617 "block_size": 4096, 00:08:05.617 "num_blocks": 655104, 00:08:05.617 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:05.617 "assigned_rate_limits": { 00:08:05.617 "rw_ios_per_sec": 0, 00:08:05.617 "rw_mbytes_per_sec": 0, 00:08:05.617 "r_mbytes_per_sec": 0, 00:08:05.617 "w_mbytes_per_sec": 0 00:08:05.617 }, 00:08:05.617 "claimed": false, 00:08:05.617 "zoned": false, 00:08:05.617 "supported_io_types": { 00:08:05.617 "read": true, 00:08:05.617 "write": true, 00:08:05.617 "unmap": true, 00:08:05.617 "flush": true, 00:08:05.617 "reset": true, 00:08:05.617 "nvme_admin": false, 00:08:05.617 "nvme_io": false, 00:08:05.617 "nvme_io_md": false, 00:08:05.617 "write_zeroes": true, 00:08:05.617 "zcopy": false, 00:08:05.617 "get_zone_info": false, 00:08:05.617 "zone_management": false, 00:08:05.617 "zone_append": false, 00:08:05.617 "compare": true, 00:08:05.617 "compare_and_write": false, 00:08:05.617 "abort": true, 00:08:05.617 "seek_hole": false, 00:08:05.617 "seek_data": false, 00:08:05.617 "copy": true, 00:08:05.617 "nvme_iov_md": false 00:08:05.617 }, 00:08:05.617 "driver_specific": { 
00:08:05.617 "gpt": { 00:08:05.617 "base_bdev": "Nvme1n1", 00:08:05.617 "offset_blocks": 256, 00:08:05.617 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:05.617 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:05.617 "partition_name": "SPDK_TEST_first" 00:08:05.617 } 00:08:05.617 } 00:08:05.617 } 00:08:05.617 ]' 00:08:05.617 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:05.876 { 00:08:05.876 "name": "Nvme1n1p2", 00:08:05.876 "aliases": [ 00:08:05.876 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:05.876 ], 00:08:05.876 "product_name": "GPT Disk", 00:08:05.876 "block_size": 4096, 00:08:05.876 "num_blocks": 655103, 00:08:05.876 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:05.876 "assigned_rate_limits": { 00:08:05.876 "rw_ios_per_sec": 0, 00:08:05.876 "rw_mbytes_per_sec": 0, 00:08:05.876 "r_mbytes_per_sec": 0, 00:08:05.876 "w_mbytes_per_sec": 0 00:08:05.876 }, 00:08:05.876 "claimed": false, 00:08:05.876 "zoned": false, 00:08:05.876 "supported_io_types": { 00:08:05.876 "read": true, 00:08:05.876 "write": true, 00:08:05.876 "unmap": true, 00:08:05.876 "flush": true, 00:08:05.876 "reset": true, 00:08:05.876 "nvme_admin": false, 00:08:05.876 "nvme_io": false, 00:08:05.876 "nvme_io_md": false, 00:08:05.876 "write_zeroes": true, 00:08:05.876 "zcopy": false, 00:08:05.876 "get_zone_info": false, 00:08:05.876 "zone_management": false, 00:08:05.876 "zone_append": false, 00:08:05.876 "compare": true, 00:08:05.876 "compare_and_write": false, 00:08:05.876 "abort": true, 00:08:05.876 "seek_hole": false, 00:08:05.876 "seek_data": false, 00:08:05.876 "copy": true, 00:08:05.876 "nvme_iov_md": false 00:08:05.876 }, 00:08:05.876 "driver_specific": { 00:08:05.876 "gpt": { 00:08:05.876 "base_bdev": "Nvme1n1", 00:08:05.876 "offset_blocks": 655360, 00:08:05.876 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:05.876 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:05.876 "partition_name": "SPDK_TEST_second" 00:08:05.876 } 00:08:05.876 } 00:08:05.876 } 00:08:05.876 ]' 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74018 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74018 ']' 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74018 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74018 00:08:05.876 killing process with pid 74018 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74018' 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74018 00:08:05.876 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74018 00:08:06.133 00:08:06.133 real 0m1.780s 00:08:06.133 user 0m1.984s 00:08:06.133 sys 0m0.325s 00:08:06.133 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.133 ************************************ 00:08:06.133 END TEST bdev_gpt_uuid 00:08:06.133 ************************************ 00:08:06.133 11:39:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:06.391 11:39:19 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:06.391 11:39:19 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:06.391 11:39:19 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:06.391 11:39:19 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:06.391 11:39:19 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:06.391 11:39:19 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:06.391 11:39:19 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:06.391 11:39:19 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:06.391 11:39:19 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:06.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:06.648 Waiting for block devices as requested 00:08:06.648 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:06.906 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:06.906 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:06.906 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:12.176 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:12.176 11:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:12.176 11:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:12.434 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:12.434 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:12.434 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:12.434 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:12.434 11:39:25 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:12.434 00:08:12.434 real 0m48.560s 00:08:12.434 user 1m1.247s 00:08:12.434 sys 0m7.536s 00:08:12.434 11:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.434 ************************************ 00:08:12.434 END TEST blockdev_nvme_gpt 00:08:12.434 ************************************ 00:08:12.434 11:39:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:12.434 11:39:25 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:12.434 11:39:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.434 11:39:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.435 11:39:25 -- common/autotest_common.sh@10 -- # set +x 00:08:12.435 ************************************ 00:08:12.435 START TEST nvme 00:08:12.435 ************************************ 00:08:12.435 11:39:25 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:12.435 * Looking for test storage... 00:08:12.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:12.435 11:39:25 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:12.435 11:39:25 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:12.435 11:39:25 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:12.435 11:39:25 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:12.435 11:39:25 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.435 11:39:25 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.435 11:39:25 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.435 11:39:25 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.435 11:39:25 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.435 11:39:25 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.435 11:39:25 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.435 11:39:25 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.435 11:39:25 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.435 11:39:25 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.435 11:39:25 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.435 11:39:25 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:12.435 11:39:25 nvme -- scripts/common.sh@345 -- # : 1 00:08:12.435 11:39:25 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.435 11:39:25 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.435 11:39:25 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:12.435 11:39:25 nvme -- scripts/common.sh@353 -- # local d=1 00:08:12.435 11:39:25 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.435 11:39:25 nvme -- scripts/common.sh@355 -- # echo 1 00:08:12.435 11:39:25 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.435 11:39:25 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:12.435 11:39:25 nvme -- scripts/common.sh@353 -- # local d=2 00:08:12.435 11:39:25 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.435 11:39:25 nvme -- scripts/common.sh@355 -- # echo 2 00:08:12.435 11:39:25 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.435 11:39:25 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.435 11:39:25 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.435 11:39:25 nvme -- scripts/common.sh@368 -- # return 0 00:08:12.435 11:39:25 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.435 11:39:25 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:12.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.435 --rc genhtml_branch_coverage=1 00:08:12.435 --rc genhtml_function_coverage=1 00:08:12.435 --rc genhtml_legend=1 00:08:12.435 --rc geninfo_all_blocks=1 00:08:12.435 --rc geninfo_unexecuted_blocks=1 00:08:12.435 00:08:12.435 ' 00:08:12.435 11:39:25 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:12.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.435 --rc genhtml_branch_coverage=1 00:08:12.435 --rc genhtml_function_coverage=1 00:08:12.435 --rc genhtml_legend=1 00:08:12.435 --rc geninfo_all_blocks=1 00:08:12.435 --rc geninfo_unexecuted_blocks=1 00:08:12.435 00:08:12.435 ' 00:08:12.435 11:39:25 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:12.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.435 --rc genhtml_branch_coverage=1 00:08:12.435 --rc genhtml_function_coverage=1 00:08:12.435 --rc genhtml_legend=1 00:08:12.435 --rc geninfo_all_blocks=1 00:08:12.435 --rc geninfo_unexecuted_blocks=1 00:08:12.435 00:08:12.435 ' 00:08:12.435 11:39:25 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:12.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.435 --rc genhtml_branch_coverage=1 00:08:12.435 --rc genhtml_function_coverage=1 00:08:12.435 --rc genhtml_legend=1 00:08:12.435 --rc geninfo_all_blocks=1 00:08:12.435 --rc geninfo_unexecuted_blocks=1 00:08:12.435 00:08:12.435 ' 00:08:12.435 11:39:25 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:13.048 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:13.632 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:13.632 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:13.632 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:13.632 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:13.632 11:39:26 nvme -- nvme/nvme.sh@79 -- # uname 00:08:13.632 11:39:26 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:13.632 11:39:26 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:13.632 11:39:26 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:13.632 11:39:26 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:13.632 11:39:26 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:13.632 11:39:26 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:13.632 11:39:26 nvme -- common/autotest_common.sh@1071 -- # stubpid=74642 00:08:13.632 Waiting for stub to ready for secondary processes... 00:08:13.632 11:39:26 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:13.632 11:39:26 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:13.632 11:39:26 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/74642 ]] 00:08:13.632 11:39:26 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:13.632 11:39:26 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:13.632 [2024-11-19 11:39:26.932116] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:13.632 [2024-11-19 11:39:26.932236] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:14.564 [2024-11-19 11:39:27.652927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:14.564 [2024-11-19 11:39:27.672945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.564 [2024-11-19 11:39:27.673252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.564 [2024-11-19 11:39:27.673251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.564 [2024-11-19 11:39:27.683242] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:14.564 [2024-11-19 11:39:27.683278] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:14.564 [2024-11-19 11:39:27.694598] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:14.564 [2024-11-19 11:39:27.694923] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:14.564 [2024-11-19 11:39:27.696067] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:14.564 [2024-11-19 11:39:27.696346] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:14.564 [2024-11-19 11:39:27.696462] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:14.564 [2024-11-19 11:39:27.697755] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:14.564 [2024-11-19 11:39:27.698066] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:14.564 [2024-11-19 11:39:27.698136] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:14.564 [2024-11-19 11:39:27.700680] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:14.564 [2024-11-19 11:39:27.701015] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:14.564 [2024-11-19 11:39:27.701257] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:14.564 [2024-11-19 11:39:27.701346] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:14.564 [2024-11-19 11:39:27.701438] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:14.564 done. 00:08:14.564 11:39:27 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:14.564 11:39:27 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:14.564 11:39:27 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:14.564 11:39:27 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:14.564 11:39:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.564 11:39:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.564 ************************************ 00:08:14.564 START TEST nvme_reset 00:08:14.564 ************************************ 00:08:14.564 11:39:27 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:14.822 Initializing NVMe Controllers 00:08:14.822 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:14.822 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:14.822 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:14.822 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:14.822 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:14.822 00:08:14.822 real 0m0.156s 00:08:14.822 user 0m0.049s 00:08:14.822 sys 0m0.072s 00:08:14.822 11:39:28 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.822 ************************************ 00:08:14.822 END TEST nvme_reset 00:08:14.822 ************************************ 00:08:14.822 11:39:28 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:14.822 11:39:28 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:14.822 11:39:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:14.822 11:39:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.822 11:39:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.822 ************************************ 00:08:14.822 START TEST nvme_identify 00:08:14.822 ************************************ 00:08:14.822 11:39:28 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:14.822 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:14.822 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:14.822 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:14.822 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:14.822 11:39:28 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:14.822 11:39:28 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:14.822 11:39:28 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:14.822 11:39:28 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:14.822 11:39:28 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:14.822 11:39:28 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:14.822 11:39:28 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:14.822 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:15.082 
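Before the identify dumps begin, nvme_identify assembles its device list with get_nvme_bdfs: scripts/gen_nvme.sh emits a bdev JSON config and jq extracts params.traddr from every entry, which is how the four BDFs above were printed; spdk_nvme_identify is then run against the live controllers. A standalone sketch of that flow, assuming the same repo layout as this run and using only the calls already visible in the trace, is:

# Hedged sketch of the get_nvme_bdfs flow shown above (paths assume this VM).
rootdir=/home/vagrant/spdk_repo/spdk
# gen_nvme.sh prints a JSON attach config; each entry carries a PCI traddr.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"
# Mirror the identify invocation from the log.
"$rootdir/build/bin/spdk_nvme_identify" -i 0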
===================================================== 00:08:15.082 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:15.082 ===================================================== 00:08:15.082 Controller Capabilities/Features 00:08:15.082 ================================ 00:08:15.082 Vendor ID: 1b36 00:08:15.082 Subsystem Vendor ID: 1af4 00:08:15.082 Serial Number: 12340 00:08:15.082 Model Number: QEMU NVMe Ctrl 00:08:15.082 Firmware Version: 8.0.0 00:08:15.082 Recommended Arb Burst: 6 00:08:15.082 IEEE OUI Identifier: 00 54 52 00:08:15.082 Multi-path I/O 00:08:15.082 May have multiple subsystem ports: No 00:08:15.082 May have multiple controllers: No 00:08:15.082 Associated with SR-IOV VF: No 00:08:15.082 Max Data Transfer Size: 524288 00:08:15.082 Max Number of Namespaces: 256 00:08:15.082 Max Number of I/O Queues: 64 00:08:15.082 NVMe Specification Version (VS): 1.4 00:08:15.082 NVMe Specification Version (Identify): 1.4 00:08:15.082 Maximum Queue Entries: 2048 00:08:15.082 Contiguous Queues Required: Yes 00:08:15.082 Arbitration Mechanisms Supported 00:08:15.082 Weighted Round Robin: Not Supported 00:08:15.082 Vendor Specific: Not Supported 00:08:15.082 Reset Timeout: 7500 ms 00:08:15.082 Doorbell Stride: 4 bytes 00:08:15.082 NVM Subsystem Reset: Not Supported 00:08:15.082 Command Sets Supported 00:08:15.082 NVM Command Set: Supported 00:08:15.082 Boot Partition: Not Supported 00:08:15.082 Memory Page Size Minimum: 4096 bytes 00:08:15.082 Memory Page Size Maximum: 65536 bytes 00:08:15.082 Persistent Memory Region: Not Supported 00:08:15.082 Optional Asynchronous Events Supported 00:08:15.082 Namespace Attribute Notices: Supported 00:08:15.082 Firmware Activation Notices: Not Supported 00:08:15.082 ANA Change Notices: Not Supported 00:08:15.082 PLE Aggregate Log Change Notices: Not Supported 00:08:15.082 LBA Status Info Alert Notices: Not Supported 00:08:15.082 EGE Aggregate Log Change Notices: Not Supported 00:08:15.082 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.082 Zone Descriptor Change Notices: Not Supported 00:08:15.082 Discovery Log Change Notices: Not Supported 00:08:15.082 Controller Attributes 00:08:15.082 128-bit Host Identifier: Not Supported 00:08:15.082 Non-Operational Permissive Mode: Not Supported 00:08:15.082 NVM Sets: Not Supported 00:08:15.082 Read Recovery Levels: Not Supported 00:08:15.082 Endurance Groups: Not Supported 00:08:15.082 Predictable Latency Mode: Not Supported 00:08:15.082 Traffic Based Keep ALive: Not Supported 00:08:15.082 Namespace Granularity: Not Supported 00:08:15.082 SQ Associations: Not Supported 00:08:15.082 UUID List: Not Supported 00:08:15.082 Multi-Domain Subsystem: Not Supported 00:08:15.082 Fixed Capacity Management: Not Supported 00:08:15.082 Variable Capacity Management: Not Supported 00:08:15.082 Delete Endurance Group: Not Supported 00:08:15.083 Delete NVM Set: Not Supported 00:08:15.083 Extended LBA Formats Supported: Supported 00:08:15.083 Flexible Data Placement Supported: Not Supported 00:08:15.083 00:08:15.083 Controller Memory Buffer Support 00:08:15.083 ================================ 00:08:15.083 Supported: No 00:08:15.083 00:08:15.083 Persistent Memory Region Support 00:08:15.083 ================================ 00:08:15.083 Supported: No 00:08:15.083 00:08:15.083 Admin Command Set Attributes 00:08:15.083 ============================ 00:08:15.083 Security Send/Receive: Not Supported 00:08:15.083 Format NVM: Supported 00:08:15.083 Firmware Activate/Download: Not Supported 00:08:15.083 Namespace Management: 
Supported 00:08:15.083 Device Self-Test: Not Supported 00:08:15.083 Directives: Supported 00:08:15.083 NVMe-MI: Not Supported 00:08:15.083 Virtualization Management: Not Supported 00:08:15.083 Doorbell Buffer Config: Supported 00:08:15.083 Get LBA Status Capability: Not Supported 00:08:15.083 Command & Feature Lockdown Capability: Not Supported 00:08:15.083 Abort Command Limit: 4 00:08:15.083 Async Event Request Limit: 4 00:08:15.083 Number of Firmware Slots: N/A 00:08:15.083 Firmware Slot 1 Read-Only: N/A 00:08:15.083 Firmware Activation Without Reset: N/A 00:08:15.083 Multiple Update Detection Support: N/A 00:08:15.083 Firmware Update Granularity: No Information Provided 00:08:15.083 Per-Namespace SMART Log: Yes 00:08:15.083 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.083 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:15.083 Command Effects Log Page: Supported 00:08:15.083 Get Log Page Extended Data: Supported 00:08:15.083 Telemetry Log Pages: Not Supported 00:08:15.083 Persistent Event Log Pages: Not Supported 00:08:15.083 Supported Log Pages Log Page: May Support 00:08:15.083 Commands Supported & Effects Log Page: Not Supported 00:08:15.083 Feature Identifiers & Effects Log Page:May Support 00:08:15.083 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.083 Data Area 4 for Telemetry Log: Not Supported 00:08:15.083 Error Log Page Entries Supported: 1 00:08:15.083 Keep Alive: Not Supported 00:08:15.083 00:08:15.083 NVM Command Set Attributes 00:08:15.083 ========================== 00:08:15.083 Submission Queue Entry Size 00:08:15.083 Max: 64 00:08:15.083 Min: 64 00:08:15.083 Completion Queue Entry Size 00:08:15.083 Max: 16 00:08:15.083 Min: 16 00:08:15.083 Number of Namespaces: 256 00:08:15.083 Compare Command: Supported 00:08:15.083 Write Uncorrectable Command: Not Supported 00:08:15.083 Dataset Management Command: Supported 00:08:15.083 Write Zeroes Command: Supported 00:08:15.083 Set Features Save Field: Supported 00:08:15.083 Reservations: Not Supported 00:08:15.083 Timestamp: Supported 00:08:15.083 Copy: Supported 00:08:15.083 Volatile Write Cache: Present 00:08:15.083 Atomic Write Unit (Normal): 1 00:08:15.083 Atomic Write Unit (PFail): 1 00:08:15.083 Atomic Compare & Write Unit: 1 00:08:15.083 Fused Compare & Write: Not Supported 00:08:15.083 Scatter-Gather List 00:08:15.083 SGL Command Set: Supported 00:08:15.083 SGL Keyed: Not Supported 00:08:15.083 SGL Bit Bucket Descriptor: Not Supported 00:08:15.083 SGL Metadata Pointer: Not Supported 00:08:15.083 Oversized SGL: Not Supported 00:08:15.083 SGL Metadata Address: Not Supported 00:08:15.083 SGL Offset: Not Supported 00:08:15.083 Transport SGL Data Block: Not Supported 00:08:15.083 Replay Protected Memory Block: Not Supported 00:08:15.083 00:08:15.083 Firmware Slot Information 00:08:15.083 ========================= 00:08:15.083 Active slot: 1 00:08:15.083 Slot 1 Firmware Revision: 1.0 00:08:15.083 00:08:15.083 00:08:15.083 Commands Supported and Effects 00:08:15.083 ============================== 00:08:15.083 Admin Commands 00:08:15.083 -------------- 00:08:15.083 Delete I/O Submission Queue (00h): Supported 00:08:15.083 Create I/O Submission Queue (01h): Supported 00:08:15.083 Get Log Page (02h): Supported 00:08:15.083 Delete I/O Completion Queue (04h): Supported 00:08:15.083 Create I/O Completion Queue (05h): Supported 00:08:15.083 Identify (06h): Supported 00:08:15.083 Abort (08h): Supported 00:08:15.083 Set Features (09h): Supported 00:08:15.083 Get Features (0Ah): Supported 00:08:15.083 Asynchronous 
Event Request (0Ch): Supported 00:08:15.083 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.083 Directive Send (19h): Supported 00:08:15.083 Directive Receive (1Ah): Supported 00:08:15.083 Virtualization Management (1Ch): Supported 00:08:15.083 Doorbell Buffer Config (7Ch): Supported 00:08:15.083 Format NVM (80h): Supported LBA-Change 00:08:15.083 I/O Commands 00:08:15.083 ------------ 00:08:15.083 Flush (00h): Supported LBA-Change 00:08:15.083 Write (01h): Supported LBA-Change 00:08:15.083 Read (02h): Supported 00:08:15.083 Compare (05h): Supported 00:08:15.083 Write Zeroes (08h): Supported LBA-Change 00:08:15.083 Dataset Management (09h): Supported LBA-Change 00:08:15.083 Unknown (0Ch): Supported 00:08:15.083 Unknown (12h): Supported 00:08:15.083 Copy (19h): Supported LBA-Change 00:08:15.083 Unknown (1Dh): Supported LBA-Change 00:08:15.083 00:08:15.083 Error Log 00:08:15.083 ========= 00:08:15.083 00:08:15.083 Arbitration 00:08:15.083 =========== 00:08:15.083 Arbitration Burst: no limit 00:08:15.083 00:08:15.083 Power Management 00:08:15.083 ================ 00:08:15.083 Number of Power States: 1 00:08:15.083 Current Power State: Power State #0 00:08:15.083 Power State #0: 00:08:15.083 Max Power: 25.00 W 00:08:15.083 Non-Operational State: Operational 00:08:15.083 Entry Latency: 16 microseconds 00:08:15.083 Exit Latency: 4 microseconds 00:08:15.083 Relative Read Throughput: 0 00:08:15.083 Relative Read Latency: 0 00:08:15.083 Relative Write Throughput: 0 00:08:15.083 Relative Write Latency: 0 00:08:15.083 Idle Power: Not Reported 00:08:15.083 Active Power: Not Reported 00:08:15.083 Non-Operational Permissive Mode: Not Supported 00:08:15.083 00:08:15.083 Health Information 00:08:15.083 ================== 00:08:15.083 Critical Warnings: 00:08:15.083 Available Spare Space: OK 00:08:15.083 Temperature: OK 00:08:15.083 Device Reliability: OK 00:08:15.083 Read Only: No 00:08:15.083 Volatile Memory Backup: OK 00:08:15.083 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.083 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.083 Available Spare: 0% 00:08:15.083 Available Spare Threshold: 0% 00:08:15.083 Life Percentage Used: 0% 00:08:15.083 Data Units Read: 671 00:08:15.083 Data Units Written: 599 00:08:15.083 Host Read Commands: 36403 00:08:15.083 Host Write Commands: 36189 00:08:15.083 Controller Busy Time: 0 minutes 00:08:15.083 Power Cycles: 0 00:08:15.083 Power On Hours: 0 hours 00:08:15.083 Unsafe Shutdowns: 0 00:08:15.083 Unrecoverable Media Errors: 0 00:08:15.083 Lifetime Error Log Entries: 0 00:08:15.083 Warning Temperature Time: 0 minutes 00:08:15.083 Critical Temperature Time: 0 minutes 00:08:15.083 00:08:15.083 Number of Queues 00:08:15.083 ================ 00:08:15.083 Number of I/O Submission Queues: 64 00:08:15.083 Number of I/O Completion Queues: 64 00:08:15.083 00:08:15.083 ZNS Specific Controller Data 00:08:15.083 ============================ 00:08:15.083 Zone Append Size Limit: 0 00:08:15.083 00:08:15.083 00:08:15.083 Active Namespaces 00:08:15.083 ================= 00:08:15.083 Namespace ID:1 00:08:15.083 Error Recovery Timeout: Unlimited 00:08:15.083 Command Set Identifier: NVM (00h) 00:08:15.083 Deallocate: Supported 00:08:15.083 Deallocated/Unwritten Error: Supported 00:08:15.083 Deallocated Read Value: All 0x00 00:08:15.083 Deallocate in Write Zeroes: Not Supported 00:08:15.083 Deallocated Guard Field: 0xFFFF 00:08:15.083 Flush: Supported 00:08:15.083 Reservation: Not Supported 00:08:15.083 Metadata Transferred as: Separate Metadata Buffer 
00:08:15.083 Namespace Sharing Capabilities: Private 00:08:15.083 Size (in LBAs): 1548666 (5GiB) 00:08:15.083 Capacity (in LBAs): 1548666 (5GiB) 00:08:15.083 Utilization (in LBAs): 1548666 (5GiB) 00:08:15.083 Thin Provisioning: Not Supported 00:08:15.083 Per-NS Atomic Units: No 00:08:15.084 Maximum Single Source Range Length: 128 00:08:15.084 Maximum Copy Length: 128 00:08:15.084 Maximum Source Range Count: 128 00:08:15.084 NGUID/EUI64 Never Reused: No 00:08:15.084 Namespace Write Protected: No 00:08:15.084 Number of LBA Formats: 8 00:08:15.084 Current LBA Format: LBA Format #07 00:08:15.084 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.084 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.084 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.084 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.084 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.084 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.084 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.084 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.084 00:08:15.084 NVM Specific Namespace Data 00:08:15.084 =========================== 00:08:15.084 Logical Block Storage Tag Mask: 0 00:08:15.084 Protection Information Capabilities: 00:08:15.084 16b Guard Protection Information Storage Tag Support: No 00:08:15.084 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.084 Storage Tag Check Read Support: No 00:08:15.084 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.084 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.084 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.084 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.084 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.084 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.084 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.084 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.084 ===================================================== 00:08:15.084 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:15.084 ===================================================== 00:08:15.084 Controller Capabilities/Features 00:08:15.084 ================================ 00:08:15.084 Vendor ID: 1b36 00:08:15.084 Subsystem Vendor ID: 1af4 00:08:15.084 Serial Number: 12341 00:08:15.084 Model Number: QEMU NVMe Ctrl 00:08:15.084 Firmware Version: 8.0.0 00:08:15.084 Recommended Arb Burst: 6 00:08:15.084 IEEE OUI Identifier: 00 54 52 00:08:15.084 Multi-path I/O 00:08:15.084 May have multiple subsystem ports: No 00:08:15.084 May have multiple controllers: No 00:08:15.084 Associated with SR-IOV VF: No 00:08:15.084 Max Data Transfer Size: 524288 00:08:15.084 Max Number of Namespaces: 256 00:08:15.084 Max Number of I/O Queues: 64 00:08:15.084 NVMe Specification Version (VS): 1.4 00:08:15.084 NVMe Specification Version (Identify): 1.4 00:08:15.084 Maximum Queue Entries: 2048 00:08:15.084 Contiguous Queues Required: Yes 00:08:15.084 Arbitration Mechanisms Supported 00:08:15.084 Weighted Round Robin: Not Supported 00:08:15.084 Vendor Specific: Not Supported 00:08:15.084 Reset Timeout: 7500 ms 00:08:15.084 Doorbell Stride: 
4 bytes 00:08:15.084 NVM Subsystem Reset: Not Supported 00:08:15.084 Command Sets Supported 00:08:15.084 NVM Command Set: Supported 00:08:15.084 Boot Partition: Not Supported 00:08:15.084 Memory Page Size Minimum: 4096 bytes 00:08:15.084 Memory Page Size Maximum: 65536 bytes 00:08:15.084 Persistent Memory Region: Not Supported 00:08:15.084 Optional Asynchronous Events Supported 00:08:15.084 Namespace Attribute Notices: Supported 00:08:15.084 Firmware Activation Notices: Not Supported 00:08:15.084 ANA Change Notices: Not Supported 00:08:15.084 PLE Aggregate Log Change Notices: Not Supported 00:08:15.084 LBA Status Info Alert Notices: Not Supported 00:08:15.084 EGE Aggregate Log Change Notices: Not Supported 00:08:15.084 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.084 Zone Descriptor Change Notices: Not Supported 00:08:15.084 Discovery Log Change Notices: Not Supported 00:08:15.084 Controller Attributes 00:08:15.084 128-bit Host Identifier: Not Supported 00:08:15.084 Non-Operational Permissive Mode: Not Supported 00:08:15.084 NVM Sets: Not Supported 00:08:15.084 Read Recovery Levels: Not Supported 00:08:15.084 Endurance Groups: Not Supported 00:08:15.084 Predictable Latency Mode: Not Supported 00:08:15.084 Traffic Based Keep ALive: Not Supported 00:08:15.084 Namespace Granularity: Not Supported 00:08:15.084 SQ Associations: Not Supported 00:08:15.084 UUID List: Not Supported 00:08:15.084 Multi-Domain Subsystem: Not Supported 00:08:15.084 Fixed Capacity Management: Not Supported 00:08:15.084 Variable Capacity Management: Not Supported 00:08:15.084 Delete Endurance Group: Not Supported 00:08:15.084 Delete NVM Set: Not Supported 00:08:15.084 Extended LBA Formats Supported: Supported 00:08:15.084 Flexible Data Placement Supported: Not Supported 00:08:15.084 00:08:15.084 Controller Memory Buffer Support 00:08:15.084 ================================ 00:08:15.084 Supported: No 00:08:15.084 00:08:15.084 Persistent Memory Region Support 00:08:15.084 ================================ 00:08:15.084 Supported: No 00:08:15.084 00:08:15.084 Admin Command Set Attributes 00:08:15.084 ============================ 00:08:15.084 Security Send/Receive: Not Supported 00:08:15.084 Format NVM: Supported 00:08:15.084 Firmware Activate/Download: Not Supported 00:08:15.084 Namespace Management: Supported 00:08:15.084 Device Self-Test: Not Supported 00:08:15.084 Directives: Supported 00:08:15.084 NVMe-MI: Not Supported 00:08:15.084 Virtualization Management: Not Supported 00:08:15.084 Doorbell Buffer Config: Supported 00:08:15.084 Get LBA Status Capability: Not Supported 00:08:15.084 Command & Feature Lockdown Capability: Not Supported 00:08:15.084 Abort Command Limit: 4 00:08:15.084 Async Event Request Limit: 4 00:08:15.084 Number of Firmware Slots: N/A 00:08:15.084 Firmware Slot 1 Read-Only: N/A 00:08:15.084 Firmware Activation Without Reset: N/A 00:08:15.084 Multiple Update Detection Support: N/A 00:08:15.084 Firmware Update Granularity: No Information Provided 00:08:15.084 Per-Namespace SMART Log: Yes 00:08:15.084 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.084 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:15.084 Command Effects Log Page: Supported 00:08:15.084 Get Log Page Extended Data: Supported 00:08:15.084 Telemetry Log Pages: Not Supported 00:08:15.084 Persistent Event Log Pages: Not Supported 00:08:15.084 Supported Log Pages Log Page: May Support 00:08:15.084 Commands Supported & Effects Log Page: Not Supported 00:08:15.084 Feature Identifiers & Effects Log Page:May Support 
00:08:15.084 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.084 Data Area 4 for Telemetry Log: Not Supported 00:08:15.084 Error Log Page Entries Supported: 1 00:08:15.084 Keep Alive: Not Supported 00:08:15.084 00:08:15.084 NVM Command Set Attributes 00:08:15.084 ========================== 00:08:15.084 Submission Queue Entry Size 00:08:15.084 Max: 64 00:08:15.084 Min: 64 00:08:15.084 Completion Queue Entry Size 00:08:15.084 Max: 16 00:08:15.084 Min: 16 00:08:15.084 Number of Namespaces: 256 00:08:15.084 Compare Command: Supported 00:08:15.084 Write Uncorrectable Command: Not Supported 00:08:15.084 Dataset Management Command: Supported 00:08:15.084 Write Zeroes Command: Supported 00:08:15.084 Set Features Save Field: Supported 00:08:15.084 Reservations: Not Supported 00:08:15.084 Timestamp: Supported 00:08:15.084 Copy: Supported 00:08:15.084 Volatile Write Cache: Present 00:08:15.084 Atomic Write Unit (Normal): 1 00:08:15.084 Atomic Write Unit (PFail): 1 00:08:15.084 Atomic Compare & Write Unit: 1 00:08:15.084 Fused Compare & Write: Not Supported 00:08:15.084 Scatter-Gather List 00:08:15.084 SGL Command Set: Supported 00:08:15.084 SGL Keyed: Not Supported 00:08:15.084 SGL Bit Bucket Descriptor: Not Supported 00:08:15.084 SGL Metadata Pointer: Not Supported 00:08:15.084 Oversized SGL: Not Supported 00:08:15.084 SGL Metadata Address: Not Supported 00:08:15.084 SGL Offset: Not Supported 00:08:15.084 Transport SGL Data Block: Not Supported 00:08:15.084 Replay Protected Memory Block: Not Supported 00:08:15.084 00:08:15.084 Firmware Slot Information 00:08:15.084 ========================= 00:08:15.084 Active slot: 1 00:08:15.084 Slot 1 Firmware Revision: 1.0 00:08:15.084 00:08:15.084 00:08:15.084 Commands Supported and Effects 00:08:15.084 ============================== 00:08:15.084 Admin Commands 00:08:15.084 -------------- 00:08:15.084 Delete I/O Submission Queue (00h): Supported 00:08:15.084 Create I/O Submission Queue (01h): Supported 00:08:15.084 Get Log Page (02h): Supported 00:08:15.084 Delete I/O Completion Queue (04h): Supported 00:08:15.084 Create I/O Completion Queue (05h): Supported 00:08:15.084 Identify (06h): Supported 00:08:15.084 Abort (08h): Supported 00:08:15.084 Set Features (09h): Supported 00:08:15.084 Get Features (0Ah): Supported 00:08:15.084 Asynchronous Event Request (0Ch): Supported 00:08:15.084 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.084 Directive Send (19h): Supported 00:08:15.084 Directive Receive (1Ah): Supported 00:08:15.084 Virtualization Management (1Ch): Supported 00:08:15.085 Doorbell Buffer Config (7Ch): Supported 00:08:15.085 Format NVM (80h): Supported LBA-Change 00:08:15.085 I/O Commands 00:08:15.085 ------------ 00:08:15.085 Flush (00h): Supported LBA-Change 00:08:15.085 Write (01h): Supported LBA-Change 00:08:15.085 Read (02h): Supported 00:08:15.085 Compare (05h): Supported 00:08:15.085 Write Zeroes (08h): Supported LBA-Change 00:08:15.085 Dataset Management (09h): Supported LBA-Change 00:08:15.085 Unknown (0Ch): Supported 00:08:15.085 Unknown (12h): Supported 00:08:15.085 Copy (19h): Supported LBA-Change 00:08:15.085 Unknown (1Dh): Supported LBA-Change 00:08:15.085 00:08:15.085 Error Log 00:08:15.085 ========= 00:08:15.085 00:08:15.085 Arbitration 00:08:15.085 =========== 00:08:15.085 Arbitration Burst: no limit 00:08:15.085 00:08:15.085 Power Management 00:08:15.085 ================ 00:08:15.085 Number of Power States: 1 00:08:15.085 Current Power State: Power State #0 00:08:15.085 Power State #0: 00:08:15.085 Max 
Power: 25.00 W 00:08:15.085 Non-Operational State: Operational 00:08:15.085 Entry Latency: 16 microseconds 00:08:15.085 Exit Latency: 4 microseconds 00:08:15.085 Relative Read Throughput: 0 00:08:15.085 Relative Read Latency: 0 00:08:15.085 Relative Write Throughput: 0 00:08:15.085 Relative Write Latency: 0 00:08:15.085 Idle Power: Not Reported 00:08:15.085 Active Power: Not Reported 00:08:15.085 Non-Operational Permissive Mode: Not Supported 00:08:15.085 00:08:15.085 Health Information 00:08:15.085 ================== 00:08:15.085 Critical Warnings: 00:08:15.085 Available Spare Space: OK 00:08:15.085 Temperature: OK 00:08:15.085 Device Reliability: OK 00:08:15.085 Read Only: No 00:08:15.085 Volatile Memory Backup: OK 00:08:15.085 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.085 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.085 Available Spare: 0% 00:08:15.085 Available Spare Threshold: 0% 00:08:15.085 Life Percentage Used: 0% 00:08:15.085 Data Units Read: 1021 00:08:15.085 Data Units Written: 888 00:08:15.085 Host Read Commands: 53942 00:08:15.085 Host Write Commands: 52731 00:08:15.085 Controller Busy Time: 0 minutes 00:08:15.085 Power Cycles: 0 00:08:15.085 Power On Hours: 0 hours 00:08:15.085 Unsafe Shutdowns: 0 00:08:15.085 Unrecoverable Media Errors: 0 00:08:15.085 Lifetime Error Log Entries: 0 00:08:15.085 Warning Temperature Time: 0 minutes 00:08:15.085 Critical Temperature Time: 0 minutes 00:08:15.085 00:08:15.085 Number of Queues 00:08:15.085 ================ 00:08:15.085 Number of I/O Submission Queues: 64 00:08:15.085 Number of I/O Completion Queues: 64 00:08:15.085 00:08:15.085 ZNS Specific Controller Data 00:08:15.085 ============================ 00:08:15.085 Zone Append Size Limit: 0 00:08:15.085 00:08:15.085 00:08:15.085 Active Namespaces 00:08:15.085 ================= 00:08:15.085 Namespace ID:1 00:08:15.085 Error Recovery Timeout: Unlimited 00:08:15.085 Command Set Identifier: NVM (00h) 00:08:15.085 Deallocate: Supported 00:08:15.085 Deallocated/Unwritten Error: Supported 00:08:15.085 Deallocated Read Value: All 0x00 00:08:15.085 Deallocate in Write Zeroes: Not Supported 00:08:15.085 Deallocated Guard Field: 0xFFFF 00:08:15.085 Flush: Supported 00:08:15.085 Reservation: Not Supported 00:08:15.085 Namespace Sharing Capabilities: Private 00:08:15.085 Size (in LBAs): 1310720 (5GiB) 00:08:15.085 Capacity (in LBAs): 1310720 (5GiB) 00:08:15.085 Utilization (in LBAs): 1310720 (5GiB) 00:08:15.085 Thin Provisioning: Not Supported 00:08:15.085 Per-NS Atomic Units: No 00:08:15.085 Maximum Single Source Range Length: 128 00:08:15.085 Maximum Copy Length: 128 00:08:15.085 Maximum Source Range Count: 128 00:08:15.085 NGUID/EUI64 Never Reused: No 00:08:15.085 Namespace Write Protected: No 00:08:15.085 Number of LBA Formats: 8 00:08:15.085 Current LBA Format: LBA Format #04 00:08:15.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.085 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.085 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.085 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.085 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.085 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.085 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.085 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.085 00:08:15.085 NVM Specific Namespace Data 00:08:15.085 =========================== 00:08:15.085 Logical Block Storage Tag Mask: 0 00:08:15.085 Protection Information Capabilities: 00:08:15.085 16b 
Guard Protection Information Storage Tag Support: No 00:08:15.085 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.085 Storage Tag Check Read Support: No 00:08:15.085 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.085 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.085 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.085 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.085 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.085 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.085 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.085 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.085 ===================================================== 00:08:15.085 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:15.085 ===================================================== 00:08:15.085 Controller Capabilities/Features 00:08:15.085 ================================ 00:08:15.085 Vendor ID: 1b36 00:08:15.085 Subsystem Vendor ID: 1af4 00:08:15.085 Serial Number: 12343 00:08:15.085 Model Number: QEMU NVMe Ctrl 00:08:15.085 Firmware Version: 8.0.0 00:08:15.085 Recommended Arb Burst: 6 00:08:15.085 IEEE OUI Identifier: 00 54 52 00:08:15.085 Multi-path I/O 00:08:15.085 May have multiple subsystem ports: No 00:08:15.085 May have multiple controllers: Yes 00:08:15.085 Associated with SR-IOV VF: No 00:08:15.085 Max Data Transfer Size: 524288 00:08:15.085 Max Number of Namespaces: 256 00:08:15.085 Max Number of I/O Queues: 64 00:08:15.085 NVMe Specification Version (VS): 1.4 00:08:15.085 NVMe Specification Version (Identify): 1.4 00:08:15.085 Maximum Queue Entries: 2048 00:08:15.085 Contiguous Queues Required: Yes 00:08:15.085 Arbitration Mechanisms Supported 00:08:15.085 Weighted Round Robin: Not Supported 00:08:15.085 Vendor Specific: Not Supported 00:08:15.085 Reset Timeout: 7500 ms 00:08:15.085 Doorbell Stride: 4 bytes 00:08:15.085 NVM Subsystem Reset: Not Supported 00:08:15.085 Command Sets Supported 00:08:15.085 NVM Command Set: Supported 00:08:15.085 Boot Partition: Not Supported 00:08:15.085 Memory Page Size Minimum: 4096 bytes 00:08:15.085 Memory Page Size Maximum: 65536 bytes 00:08:15.085 Persistent Memory Region: Not Supported 00:08:15.085 Optional Asynchronous Events Supported 00:08:15.085 Namespace Attribute Notices: Supported 00:08:15.085 Firmware Activation Notices: Not Supported 00:08:15.085 ANA Change Notices: Not Supported 00:08:15.085 PLE Aggregate Log Change Notices: Not Supported 00:08:15.085 LBA Status Info Alert Notices: Not Supported 00:08:15.085 EGE Aggregate Log Change Notices: Not Supported 00:08:15.085 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.085 Zone Descriptor Change Notices: Not Supported 00:08:15.085 Discovery Log Change Notices: Not Supported 00:08:15.085 Controller Attributes 00:08:15.085 128-bit Host Identifier: Not Supported 00:08:15.085 Non-Operational Permissive Mode: Not Supported 00:08:15.085 NVM Sets: Not Supported 00:08:15.085 Read Recovery Levels: Not Supported 00:08:15.085 Endurance Groups: Supported 00:08:15.085 Predictable Latency Mode: Not Supported 00:08:15.085 Traffic Based Keep ALive: Not Supported 00:08:15.085 
Namespace Granularity: Not Supported 00:08:15.085 SQ Associations: Not Supported 00:08:15.085 UUID List: Not Supported 00:08:15.085 Multi-Domain Subsystem: Not Supported 00:08:15.085 Fixed Capacity Management: Not Supported 00:08:15.085 Variable Capacity Management: Not Supported 00:08:15.085 Delete Endurance Group: Not Supported 00:08:15.085 Delete NVM Set: Not Supported 00:08:15.085 Extended LBA Formats Supported: Supported 00:08:15.085 Flexible Data Placement Supported: Supported 00:08:15.085 00:08:15.085 Controller Memory Buffer Support 00:08:15.085 ================================ 00:08:15.085 Supported: No 00:08:15.085 00:08:15.085 Persistent Memory Region Support 00:08:15.085 ================================ 00:08:15.085 Supported: No 00:08:15.085 00:08:15.085 Admin Command Set Attributes 00:08:15.085 ============================ 00:08:15.085 Security Send/Receive: Not Supported 00:08:15.085 Format NVM: Supported 00:08:15.085 Firmware Activate/Download: Not Supported 00:08:15.085 Namespace Management: Supported 00:08:15.085 Device Self-Test: Not Supported 00:08:15.085 Directives: Supported 00:08:15.085 NVMe-MI: Not Supported 00:08:15.085 Virtualization Management: Not Supported 00:08:15.085 Doorbell Buffer Config: Supported 00:08:15.085 Get LBA Status Capability: Not Supported 00:08:15.085 Command & Feature Lockdown Capability: Not Supported 00:08:15.085 Abort Command Limit: 4 00:08:15.085 Async Event Request Limit: 4 00:08:15.085 Number of Firmware Slots: N/A 00:08:15.085 Firmware Slot 1 Read-Only: N/A 00:08:15.085 Firmware Activation Without Reset: N/A 00:08:15.085 Multiple Update Detection Support: N/A 00:08:15.085 Firmware Update Granularity: No Information Provided 00:08:15.085 Per-Namespace SMART Log: Yes 00:08:15.085 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.085 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:15.085 Command Effects Log Page: Supported 00:08:15.085 Get Log Page Extended Data: Supported 00:08:15.085 Telemetry Log Pages: Not Supported 00:08:15.086 Persistent Event Log Pages: Not Supported 00:08:15.086 Supported Log Pages Log Page: May Support 00:08:15.086 Commands Supported & Effects Log Page: Not Supported 00:08:15.086 Feature Identifiers & Effects Log Page:May Support 00:08:15.086 [2024-11-19 11:39:28.324730] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 74664 terminated unexpected 00:08:15.086 [2024-11-19 11:39:28.326240] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 74664 terminated unexpected 00:08:15.086 [2024-11-19 11:39:28.328051] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 74664 terminated unexpected 00:08:15.086 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.086 Data Area 4 for Telemetry Log: Not Supported 00:08:15.086 Error Log Page Entries Supported: 1 00:08:15.086 Keep Alive: Not Supported 00:08:15.086 00:08:15.086 NVM Command Set Attributes 00:08:15.086 ========================== 00:08:15.086 Submission Queue Entry Size 00:08:15.086 Max: 64 00:08:15.086 Min: 64 00:08:15.086 Completion Queue Entry Size 00:08:15.086 Max: 16 00:08:15.086 Min: 16 00:08:15.086 Number of Namespaces: 256 00:08:15.086 Compare Command: Supported 00:08:15.086 Write Uncorrectable Command: Not Supported 00:08:15.086 Dataset Management Command: Supported 00:08:15.086 Write Zeroes Command: Supported 00:08:15.086 Set Features Save Field: Supported 00:08:15.086 Reservations: Not Supported 00:08:15.086 Timestamp: Supported 00:08:15.086
Copy: Supported 00:08:15.086 Volatile Write Cache: Present 00:08:15.086 Atomic Write Unit (Normal): 1 00:08:15.086 Atomic Write Unit (PFail): 1 00:08:15.086 Atomic Compare & Write Unit: 1 00:08:15.086 Fused Compare & Write: Not Supported 00:08:15.086 Scatter-Gather List 00:08:15.086 SGL Command Set: Supported 00:08:15.086 SGL Keyed: Not Supported 00:08:15.086 SGL Bit Bucket Descriptor: Not Supported 00:08:15.086 SGL Metadata Pointer: Not Supported 00:08:15.086 Oversized SGL: Not Supported 00:08:15.086 SGL Metadata Address: Not Supported 00:08:15.086 SGL Offset: Not Supported 00:08:15.086 Transport SGL Data Block: Not Supported 00:08:15.086 Replay Protected Memory Block: Not Supported 00:08:15.086 00:08:15.086 Firmware Slot Information 00:08:15.086 ========================= 00:08:15.086 Active slot: 1 00:08:15.086 Slot 1 Firmware Revision: 1.0 00:08:15.086 00:08:15.086 00:08:15.086 Commands Supported and Effects 00:08:15.086 ============================== 00:08:15.086 Admin Commands 00:08:15.086 -------------- 00:08:15.086 Delete I/O Submission Queue (00h): Supported 00:08:15.086 Create I/O Submission Queue (01h): Supported 00:08:15.086 Get Log Page (02h): Supported 00:08:15.086 Delete I/O Completion Queue (04h): Supported 00:08:15.086 Create I/O Completion Queue (05h): Supported 00:08:15.086 Identify (06h): Supported 00:08:15.086 Abort (08h): Supported 00:08:15.086 Set Features (09h): Supported 00:08:15.086 Get Features (0Ah): Supported 00:08:15.086 Asynchronous Event Request (0Ch): Supported 00:08:15.086 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.086 Directive Send (19h): Supported 00:08:15.086 Directive Receive (1Ah): Supported 00:08:15.086 Virtualization Management (1Ch): Supported 00:08:15.086 Doorbell Buffer Config (7Ch): Supported 00:08:15.086 Format NVM (80h): Supported LBA-Change 00:08:15.086 I/O Commands 00:08:15.086 ------------ 00:08:15.086 Flush (00h): Supported LBA-Change 00:08:15.086 Write (01h): Supported LBA-Change 00:08:15.086 Read (02h): Supported 00:08:15.086 Compare (05h): Supported 00:08:15.086 Write Zeroes (08h): Supported LBA-Change 00:08:15.086 Dataset Management (09h): Supported LBA-Change 00:08:15.086 Unknown (0Ch): Supported 00:08:15.086 Unknown (12h): Supported 00:08:15.086 Copy (19h): Supported LBA-Change 00:08:15.086 Unknown (1Dh): Supported LBA-Change 00:08:15.086 00:08:15.086 Error Log 00:08:15.086 ========= 00:08:15.086 00:08:15.086 Arbitration 00:08:15.086 =========== 00:08:15.086 Arbitration Burst: no limit 00:08:15.086 00:08:15.086 Power Management 00:08:15.086 ================ 00:08:15.086 Number of Power States: 1 00:08:15.086 Current Power State: Power State #0 00:08:15.086 Power State #0: 00:08:15.086 Max Power: 25.00 W 00:08:15.086 Non-Operational State: Operational 00:08:15.086 Entry Latency: 16 microseconds 00:08:15.086 Exit Latency: 4 microseconds 00:08:15.086 Relative Read Throughput: 0 00:08:15.086 Relative Read Latency: 0 00:08:15.086 Relative Write Throughput: 0 00:08:15.086 Relative Write Latency: 0 00:08:15.086 Idle Power: Not Reported 00:08:15.086 Active Power: Not Reported 00:08:15.086 Non-Operational Permissive Mode: Not Supported 00:08:15.086 00:08:15.086 Health Information 00:08:15.086 ================== 00:08:15.086 Critical Warnings: 00:08:15.086 Available Spare Space: OK 00:08:15.086 Temperature: OK 00:08:15.086 Device Reliability: OK 00:08:15.086 Read Only: No 00:08:15.086 Volatile Memory Backup: OK 00:08:15.086 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.086 Temperature Threshold: 343 Kelvin (70 
Celsius) 00:08:15.086 Available Spare: 0% 00:08:15.086 Available Spare Threshold: 0% 00:08:15.086 Life Percentage Used: 0% 00:08:15.086 Data Units Read: 801 00:08:15.086 Data Units Written: 730 00:08:15.086 Host Read Commands: 37773 00:08:15.086 Host Write Commands: 37196 00:08:15.086 Controller Busy Time: 0 minutes 00:08:15.086 Power Cycles: 0 00:08:15.086 Power On Hours: 0 hours 00:08:15.086 Unsafe Shutdowns: 0 00:08:15.086 Unrecoverable Media Errors: 0 00:08:15.086 Lifetime Error Log Entries: 0 00:08:15.086 Warning Temperature Time: 0 minutes 00:08:15.086 Critical Temperature Time: 0 minutes 00:08:15.086 00:08:15.086 Number of Queues 00:08:15.086 ================ 00:08:15.086 Number of I/O Submission Queues: 64 00:08:15.086 Number of I/O Completion Queues: 64 00:08:15.086 00:08:15.086 ZNS Specific Controller Data 00:08:15.086 ============================ 00:08:15.086 Zone Append Size Limit: 0 00:08:15.086 00:08:15.086 00:08:15.086 Active Namespaces 00:08:15.086 ================= 00:08:15.086 Namespace ID:1 00:08:15.086 Error Recovery Timeout: Unlimited 00:08:15.086 Command Set Identifier: NVM (00h) 00:08:15.086 Deallocate: Supported 00:08:15.086 Deallocated/Unwritten Error: Supported 00:08:15.086 Deallocated Read Value: All 0x00 00:08:15.086 Deallocate in Write Zeroes: Not Supported 00:08:15.086 Deallocated Guard Field: 0xFFFF 00:08:15.086 Flush: Supported 00:08:15.086 Reservation: Not Supported 00:08:15.086 Namespace Sharing Capabilities: Multiple Controllers 00:08:15.086 Size (in LBAs): 262144 (1GiB) 00:08:15.086 Capacity (in LBAs): 262144 (1GiB) 00:08:15.086 Utilization (in LBAs): 262144 (1GiB) 00:08:15.086 Thin Provisioning: Not Supported 00:08:15.086 Per-NS Atomic Units: No 00:08:15.086 Maximum Single Source Range Length: 128 00:08:15.086 Maximum Copy Length: 128 00:08:15.086 Maximum Source Range Count: 128 00:08:15.086 NGUID/EUI64 Never Reused: No 00:08:15.086 Namespace Write Protected: No 00:08:15.086 Endurance group ID: 1 00:08:15.086 Number of LBA Formats: 8 00:08:15.086 Current LBA Format: LBA Format #04 00:08:15.086 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.086 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.086 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.086 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.086 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.086 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.086 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.086 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.086 00:08:15.086
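[Editor's note, not part of the captured log: a quick sanity check on the namespace figures above. At the current LBA format (#04: 4096-byte data, no metadata), the reported 262144 LBAs come to exactly 1 GiB, matching the "(1GiB)" annotations, and the SMART "Data Units" counters are, per the NVMe base specification, reported in thousands of 512-byte units. A minimal bash sketch:]

lbas=262144        # Size (in LBAs) reported for namespace 1 above
lba_bytes=4096     # Current LBA Format #04: Data Size: 4096
echo $(( lbas * lba_bytes ))         # 1073741824 bytes = exactly 1 GiB

units_read=801     # "Data Units Read" above; 1 unit = 1000 * 512 bytes
echo $(( units_read * 1000 * 512 ))  # 410112000 bytes, roughly 391 MiB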
Get Feature FDP: 00:08:15.086 ================ 00:08:15.086 Enabled: Yes 00:08:15.086 FDP configuration index: 0 00:08:15.086 00:08:15.086 FDP configurations log page 00:08:15.086 =========================== 00:08:15.086 Number of FDP configurations: 1 00:08:15.086 Version: 0 00:08:15.086 Size: 112 00:08:15.086 FDP Configuration Descriptor: 0 00:08:15.086 Descriptor Size: 96 00:08:15.086 Reclaim Group Identifier format: 2 00:08:15.086 FDP Volatile Write Cache: Not Present 00:08:15.086 FDP Configuration: Valid 00:08:15.086 Vendor Specific Size: 0 00:08:15.086 Number of Reclaim Groups: 2 00:08:15.086 Number of Reclaim Unit Handles: 8 00:08:15.086 Max Placement Identifiers: 128 00:08:15.086 Number of Namespaces Supported: 256 00:08:15.086 Reclaim Unit Nominal Size: 6000000 bytes 00:08:15.086 Estimated Reclaim Unit Time Limit: Not Reported 00:08:15.086 RUH Desc #000: RUH Type: Initially Isolated 00:08:15.086 RUH Desc #001: RUH Type: Initially Isolated 00:08:15.086 RUH Desc #002: RUH Type: Initially Isolated 00:08:15.086 RUH Desc #003: RUH Type: Initially Isolated 00:08:15.086 RUH Desc #004: RUH Type: Initially Isolated 00:08:15.086 RUH Desc #005: RUH Type: Initially Isolated 00:08:15.086 RUH Desc #006: RUH Type: Initially Isolated 00:08:15.086 RUH Desc #007: RUH Type: Initially Isolated 00:08:15.086 00:08:15.086 FDP reclaim unit handle usage log page 00:08:15.086 ====================================== 00:08:15.086 Number of Reclaim Unit Handles: 8 00:08:15.086 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:15.086 RUH Usage Desc #001: RUH Attributes: Unused 00:08:15.086 RUH Usage Desc #002: RUH Attributes: Unused 00:08:15.086 RUH Usage Desc #003: RUH Attributes: Unused 00:08:15.086 RUH Usage Desc #004: RUH Attributes: Unused 00:08:15.086 RUH Usage Desc #005: RUH Attributes: Unused 00:08:15.086 RUH Usage Desc #006: RUH Attributes: Unused 00:08:15.086 RUH Usage Desc #007: RUH Attributes: Unused 00:08:15.086 00:08:15.086 FDP statistics log page 00:08:15.086 ======================= 00:08:15.086 Host bytes with metadata written: 464887808 00:08:15.086 [2024-11-19 11:39:28.331485] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 74664 terminated unexpected 00:08:15.086 Media bytes with metadata written: 464941056 00:08:15.086 Media bytes erased: 0 00:08:15.086 00:08:15.086 FDP events log page 00:08:15.086 =================== 00:08:15.086 Number of FDP events: 0 00:08:15.086 00:08:15.086 NVM Specific Namespace Data 00:08:15.086 =========================== 00:08:15.086 Logical Block Storage Tag Mask: 0 00:08:15.086 Protection Information Capabilities: 00:08:15.086 16b Guard Protection Information Storage Tag Support: No 00:08:15.086 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.086 Storage Tag Check Read Support: No 00:08:15.086 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.086 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.086 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.086 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.086 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.086 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.086 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.086 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.086 ===================================================== 00:08:15.086 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:15.086 ===================================================== 00:08:15.086 Controller Capabilities/Features 00:08:15.086 ================================ 00:08:15.086 Vendor ID: 1b36 00:08:15.086 Subsystem Vendor ID: 1af4 00:08:15.086 Serial Number: 12342 00:08:15.086 Model Number: QEMU NVMe Ctrl 00:08:15.086 Firmware Version: 8.0.0 00:08:15.086 Recommended Arb Burst: 6 00:08:15.086 IEEE OUI Identifier: 00 54 52 00:08:15.086 Multi-path I/O 00:08:15.086 May have multiple subsystem ports: No 00:08:15.086 May have multiple controllers: No 00:08:15.086 Associated with SR-IOV VF: No 00:08:15.086 Max Data Transfer Size: 524288 00:08:15.086 Max Number of Namespaces: 256 00:08:15.086 Max Number of I/O Queues: 64 00:08:15.086 
NVMe Specification Version (VS): 1.4 00:08:15.086 NVMe Specification Version (Identify): 1.4 00:08:15.087 Maximum Queue Entries: 2048 00:08:15.087 Contiguous Queues Required: Yes 00:08:15.087 Arbitration Mechanisms Supported 00:08:15.087 Weighted Round Robin: Not Supported 00:08:15.087 Vendor Specific: Not Supported 00:08:15.087 Reset Timeout: 7500 ms 00:08:15.087 Doorbell Stride: 4 bytes 00:08:15.087 NVM Subsystem Reset: Not Supported 00:08:15.087 Command Sets Supported 00:08:15.087 NVM Command Set: Supported 00:08:15.087 Boot Partition: Not Supported 00:08:15.087 Memory Page Size Minimum: 4096 bytes 00:08:15.087 Memory Page Size Maximum: 65536 bytes 00:08:15.087 Persistent Memory Region: Not Supported 00:08:15.087 Optional Asynchronous Events Supported 00:08:15.087 Namespace Attribute Notices: Supported 00:08:15.087 Firmware Activation Notices: Not Supported 00:08:15.087 ANA Change Notices: Not Supported 00:08:15.087 PLE Aggregate Log Change Notices: Not Supported 00:08:15.087 LBA Status Info Alert Notices: Not Supported 00:08:15.087 EGE Aggregate Log Change Notices: Not Supported 00:08:15.087 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.087 Zone Descriptor Change Notices: Not Supported 00:08:15.087 Discovery Log Change Notices: Not Supported 00:08:15.087 Controller Attributes 00:08:15.087 128-bit Host Identifier: Not Supported 00:08:15.087 Non-Operational Permissive Mode: Not Supported 00:08:15.087 NVM Sets: Not Supported 00:08:15.087 Read Recovery Levels: Not Supported 00:08:15.087 Endurance Groups: Not Supported 00:08:15.087 Predictable Latency Mode: Not Supported 00:08:15.087 Traffic Based Keep Alive: Not Supported 00:08:15.087 Namespace Granularity: Not Supported 00:08:15.087 SQ Associations: Not Supported 00:08:15.087 UUID List: Not Supported 00:08:15.087 Multi-Domain Subsystem: Not Supported 00:08:15.087 Fixed Capacity Management: Not Supported 00:08:15.087 Variable Capacity Management: Not Supported 00:08:15.087 Delete Endurance Group: Not Supported 00:08:15.087 Delete NVM Set: Not Supported 00:08:15.087 Extended LBA Formats Supported: Supported 00:08:15.087 Flexible Data Placement Supported: Not Supported 00:08:15.087 00:08:15.087
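[Editor's note, not part of the captured log: the capabilities above are internally consistent. "Max Data Transfer Size: 524288" is 128 pages of the reported 4096-byte Memory Page Size Minimum, i.e. 2^7 pages, which is how the NVMe base specification encodes MDTS (a power-of-two multiple of the minimum page size). A one-line check in bash:]

mps_min=4096        # Memory Page Size Minimum reported above
mdts_bytes=524288   # Max Data Transfer Size reported above
echo $(( mdts_bytes / mps_min ))  # 128 = 2^7, so the raw MDTS field is 7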
Controller Memory Buffer Support 00:08:15.087 ================================ 00:08:15.087 Supported: No 00:08:15.087 00:08:15.087 Persistent Memory Region Support 00:08:15.087 ================================ 00:08:15.087 Supported: No 00:08:15.087 00:08:15.087 Admin Command Set Attributes 00:08:15.087 ============================ 00:08:15.087 Security Send/Receive: Not Supported 00:08:15.087 Format NVM: Supported 00:08:15.087 Firmware Activate/Download: Not Supported 00:08:15.087 Namespace Management: Supported 00:08:15.087 Device Self-Test: Not Supported 00:08:15.087 Directives: Supported 00:08:15.087 NVMe-MI: Not Supported 00:08:15.087 Virtualization Management: Not Supported 00:08:15.087 Doorbell Buffer Config: Supported 00:08:15.087 Get LBA Status Capability: Not Supported 00:08:15.087 Command & Feature Lockdown Capability: Not Supported 00:08:15.087 Abort Command Limit: 4 00:08:15.087 Async Event Request Limit: 4 00:08:15.087 Number of Firmware Slots: N/A 00:08:15.087 Firmware Slot 1 Read-Only: N/A 00:08:15.087 Firmware Activation Without Reset: N/A 00:08:15.087 Multiple Update Detection Support: N/A 00:08:15.087 Firmware Update Granularity: No Information Provided 00:08:15.087 Per-Namespace SMART Log: Yes 00:08:15.087 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.087 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:15.087 Command Effects Log Page: Supported 00:08:15.087 Get Log Page Extended Data: Supported 00:08:15.087 Telemetry Log Pages: Not Supported 00:08:15.087 Persistent Event Log Pages: Not Supported 00:08:15.087 Supported Log Pages Log Page: May Support 00:08:15.087 Commands Supported & Effects Log Page: Not Supported 00:08:15.087 Feature Identifiers & Effects Log Page: May Support 00:08:15.087 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.087 Data Area 4 for Telemetry Log: Not Supported 00:08:15.087 Error Log Page Entries Supported: 1 00:08:15.087 Keep Alive: Not Supported 00:08:15.087 00:08:15.087 NVM Command Set Attributes 00:08:15.087 ========================== 00:08:15.087 Submission Queue Entry Size 00:08:15.087 Max: 64 00:08:15.087 Min: 64 00:08:15.087 Completion Queue Entry Size 00:08:15.087 Max: 16 00:08:15.087 Min: 16 00:08:15.087 Number of Namespaces: 256 00:08:15.087 Compare Command: Supported 00:08:15.087 Write Uncorrectable Command: Not Supported 00:08:15.087 Dataset Management Command: Supported 00:08:15.087 Write Zeroes Command: Supported 00:08:15.087 Set Features Save Field: Supported 00:08:15.087 Reservations: Not Supported 00:08:15.087 Timestamp: Supported 00:08:15.087 Copy: Supported 00:08:15.087 Volatile Write Cache: Present 00:08:15.087 Atomic Write Unit (Normal): 1 00:08:15.087 Atomic Write Unit (PFail): 1 00:08:15.087 Atomic Compare & Write Unit: 1 00:08:15.087 Fused Compare & Write: Not Supported 00:08:15.087 Scatter-Gather List 00:08:15.087 SGL Command Set: Supported 00:08:15.087 SGL Keyed: Not Supported 00:08:15.087 SGL Bit Bucket Descriptor: Not Supported 00:08:15.087 SGL Metadata Pointer: Not Supported 00:08:15.087 Oversized SGL: Not Supported 00:08:15.087 SGL Metadata Address: Not Supported 00:08:15.087 SGL Offset: Not Supported 00:08:15.087 Transport SGL Data Block: Not Supported 00:08:15.087 Replay Protected Memory Block: Not Supported 00:08:15.087 00:08:15.087 Firmware Slot Information 00:08:15.087 ========================= 00:08:15.087 Active slot: 1 00:08:15.087 Slot 1 Firmware Revision: 1.0 00:08:15.087 00:08:15.087 00:08:15.087 Commands Supported and Effects 00:08:15.087 ============================== 00:08:15.087 Admin Commands 00:08:15.087 -------------- 00:08:15.087 Delete I/O Submission Queue (00h): Supported 00:08:15.087 Create I/O Submission Queue (01h): Supported 00:08:15.087 Get Log Page (02h): Supported 00:08:15.087 Delete I/O Completion Queue (04h): Supported 00:08:15.087 Create I/O Completion Queue (05h): Supported 00:08:15.087 Identify (06h): Supported 00:08:15.087 Abort (08h): Supported 00:08:15.087 Set Features (09h): Supported 00:08:15.087 Get Features (0Ah): Supported 00:08:15.087 Asynchronous Event Request (0Ch): Supported 00:08:15.087 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.087 Directive Send (19h): Supported 00:08:15.087 Directive Receive (1Ah): Supported 00:08:15.087 Virtualization Management (1Ch): Supported 00:08:15.087 Doorbell Buffer Config (7Ch): Supported 00:08:15.087 Format NVM (80h): Supported LBA-Change 00:08:15.087 I/O Commands 00:08:15.087 ------------ 00:08:15.087 Flush (00h): Supported LBA-Change 00:08:15.087 Write (01h): Supported LBA-Change 00:08:15.087 Read (02h): Supported 00:08:15.087 Compare (05h): Supported 00:08:15.087 Write Zeroes (08h): Supported LBA-Change 00:08:15.087 Dataset Management (09h): Supported LBA-Change 00:08:15.087 Unknown (0Ch): Supported 00:08:15.087 Unknown (12h): Supported 00:08:15.087 Copy (19h): Supported LBA-Change 00:08:15.087 Unknown (1Dh): 
Supported LBA-Change 00:08:15.087 00:08:15.087 Error Log 00:08:15.087 ========= 00:08:15.087 00:08:15.087 Arbitration 00:08:15.087 =========== 00:08:15.087 Arbitration Burst: no limit 00:08:15.087 00:08:15.087 Power Management 00:08:15.087 ================ 00:08:15.087 Number of Power States: 1 00:08:15.087 Current Power State: Power State #0 00:08:15.087 Power State #0: 00:08:15.087 Max Power: 25.00 W 00:08:15.087 Non-Operational State: Operational 00:08:15.087 Entry Latency: 16 microseconds 00:08:15.087 Exit Latency: 4 microseconds 00:08:15.087 Relative Read Throughput: 0 00:08:15.087 Relative Read Latency: 0 00:08:15.087 Relative Write Throughput: 0 00:08:15.087 Relative Write Latency: 0 00:08:15.087 Idle Power: Not Reported 00:08:15.087 Active Power: Not Reported 00:08:15.087 Non-Operational Permissive Mode: Not Supported 00:08:15.087 00:08:15.087 Health Information 00:08:15.087 ================== 00:08:15.087 Critical Warnings: 00:08:15.087 Available Spare Space: OK 00:08:15.087 Temperature: OK 00:08:15.087 Device Reliability: OK 00:08:15.087 Read Only: No 00:08:15.087 Volatile Memory Backup: OK 00:08:15.088 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.088 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.088 Available Spare: 0% 00:08:15.088 Available Spare Threshold: 0% 00:08:15.088 Life Percentage Used: 0% 00:08:15.088 Data Units Read: 2126 00:08:15.088 Data Units Written: 1914 00:08:15.088 Host Read Commands: 110872 00:08:15.088 Host Write Commands: 109141 00:08:15.088 Controller Busy Time: 0 minutes 00:08:15.088 Power Cycles: 0 00:08:15.088 Power On Hours: 0 hours 00:08:15.088 Unsafe Shutdowns: 0 00:08:15.088 Unrecoverable Media Errors: 0 00:08:15.088 Lifetime Error Log Entries: 0 00:08:15.088 Warning Temperature Time: 0 minutes 00:08:15.088 Critical Temperature Time: 0 minutes 00:08:15.088 00:08:15.088 Number of Queues 00:08:15.088 ================ 00:08:15.088 Number of I/O Submission Queues: 64 00:08:15.088 Number of I/O Completion Queues: 64 00:08:15.088 00:08:15.088 ZNS Specific Controller Data 00:08:15.088 ============================ 00:08:15.088 Zone Append Size Limit: 0 00:08:15.088 00:08:15.088 00:08:15.088 Active Namespaces 00:08:15.088 ================= 00:08:15.088 Namespace ID:1 00:08:15.088 Error Recovery Timeout: Unlimited 00:08:15.088 Command Set Identifier: NVM (00h) 00:08:15.088 Deallocate: Supported 00:08:15.088 Deallocated/Unwritten Error: Supported 00:08:15.088 Deallocated Read Value: All 0x00 00:08:15.088 Deallocate in Write Zeroes: Not Supported 00:08:15.088 Deallocated Guard Field: 0xFFFF 00:08:15.088 Flush: Supported 00:08:15.088 Reservation: Not Supported 00:08:15.088 Namespace Sharing Capabilities: Private 00:08:15.088 Size (in LBAs): 1048576 (4GiB) 00:08:15.088 Capacity (in LBAs): 1048576 (4GiB) 00:08:15.088 Utilization (in LBAs): 1048576 (4GiB) 00:08:15.088 Thin Provisioning: Not Supported 00:08:15.088 Per-NS Atomic Units: No 00:08:15.088 Maximum Single Source Range Length: 128 00:08:15.088 Maximum Copy Length: 128 00:08:15.088 Maximum Source Range Count: 128 00:08:15.088 NGUID/EUI64 Never Reused: No 00:08:15.088 Namespace Write Protected: No 00:08:15.088 Number of LBA Formats: 8 00:08:15.088 Current LBA Format: LBA Format #04 00:08:15.088 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.088 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.088 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.088 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.088 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:15.088 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.088 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.088 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.088 00:08:15.088 NVM Specific Namespace Data 00:08:15.088 =========================== 00:08:15.088 Logical Block Storage Tag Mask: 0 00:08:15.088 Protection Information Capabilities: 00:08:15.088 16b Guard Protection Information Storage Tag Support: No 00:08:15.088 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.088 Storage Tag Check Read Support: No 00:08:15.088 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Namespace ID:2 00:08:15.088 Error Recovery Timeout: Unlimited 00:08:15.088 Command Set Identifier: NVM (00h) 00:08:15.088 Deallocate: Supported 00:08:15.088 Deallocated/Unwritten Error: Supported 00:08:15.088 Deallocated Read Value: All 0x00 00:08:15.088 Deallocate in Write Zeroes: Not Supported 00:08:15.088 Deallocated Guard Field: 0xFFFF 00:08:15.088 Flush: Supported 00:08:15.088 Reservation: Not Supported 00:08:15.088 Namespace Sharing Capabilities: Private 00:08:15.088 Size (in LBAs): 1048576 (4GiB) 00:08:15.088 Capacity (in LBAs): 1048576 (4GiB) 00:08:15.088 Utilization (in LBAs): 1048576 (4GiB) 00:08:15.088 Thin Provisioning: Not Supported 00:08:15.088 Per-NS Atomic Units: No 00:08:15.088 Maximum Single Source Range Length: 128 00:08:15.088 Maximum Copy Length: 128 00:08:15.088 Maximum Source Range Count: 128 00:08:15.088 NGUID/EUI64 Never Reused: No 00:08:15.088 Namespace Write Protected: No 00:08:15.088 Number of LBA Formats: 8 00:08:15.088 Current LBA Format: LBA Format #04 00:08:15.088 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.088 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.088 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.088 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.088 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.088 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.088 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.088 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.088 00:08:15.088 NVM Specific Namespace Data 00:08:15.088 =========================== 00:08:15.088 Logical Block Storage Tag Mask: 0 00:08:15.088 Protection Information Capabilities: 00:08:15.088 16b Guard Protection Information Storage Tag Support: No 00:08:15.088 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.088 Storage Tag Check Read Support: No 00:08:15.088 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Namespace ID:3 00:08:15.088 Error Recovery Timeout: Unlimited 00:08:15.088 Command Set Identifier: NVM (00h) 00:08:15.088 Deallocate: Supported 00:08:15.088 Deallocated/Unwritten Error: Supported 00:08:15.088 Deallocated Read Value: All 0x00 00:08:15.088 Deallocate in Write Zeroes: Not Supported 00:08:15.088 Deallocated Guard Field: 0xFFFF 00:08:15.088 Flush: Supported 00:08:15.088 Reservation: Not Supported 00:08:15.088 Namespace Sharing Capabilities: Private 00:08:15.088 Size (in LBAs): 1048576 (4GiB) 00:08:15.088 Capacity (in LBAs): 1048576 (4GiB) 00:08:15.088 Utilization (in LBAs): 1048576 (4GiB) 00:08:15.088 Thin Provisioning: Not Supported 00:08:15.088 Per-NS Atomic Units: No 00:08:15.088 Maximum Single Source Range Length: 128 00:08:15.088 Maximum Copy Length: 128 00:08:15.088 Maximum Source Range Count: 128 00:08:15.088 NGUID/EUI64 Never Reused: No 00:08:15.088 Namespace Write Protected: No 00:08:15.088 Number of LBA Formats: 8 00:08:15.088 Current LBA Format: LBA Format #04 00:08:15.088 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.088 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.088 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.088 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.088 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.088 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.088 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.088 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.088 00:08:15.088 NVM Specific Namespace Data 00:08:15.088 =========================== 00:08:15.088 Logical Block Storage Tag Mask: 0 00:08:15.088 Protection Information Capabilities: 00:08:15.088 16b Guard Protection Information Storage Tag Support: No 00:08:15.088 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.088 Storage Tag Check Read Support: No 00:08:15.088 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.088 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:15.088 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:15.346 ===================================================== 00:08:15.346 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:15.346 ===================================================== 00:08:15.346 Controller Capabilities/Features 00:08:15.346 ================================ 00:08:15.346 Vendor ID: 1b36 00:08:15.346 Subsystem Vendor ID: 1af4 00:08:15.346 Serial Number: 12340 00:08:15.346 Model Number: QEMU NVMe Ctrl 00:08:15.346 Firmware Version: 8.0.0 00:08:15.346 Recommended Arb Burst: 6 00:08:15.346 IEEE OUI Identifier: 00 54 52 00:08:15.346 Multi-path I/O 00:08:15.346 May have multiple subsystem ports: No 00:08:15.346 May have multiple controllers: No 00:08:15.346 Associated with SR-IOV VF: No 00:08:15.346 Max Data Transfer Size: 524288 00:08:15.346 Max Number of Namespaces: 256 00:08:15.346 Max Number of I/O Queues: 64 00:08:15.346 NVMe Specification Version (VS): 1.4 00:08:15.346 NVMe Specification Version (Identify): 1.4 00:08:15.346 Maximum Queue Entries: 2048 00:08:15.346 Contiguous Queues Required: Yes 00:08:15.346 Arbitration Mechanisms Supported 00:08:15.346 Weighted Round Robin: Not Supported 00:08:15.346 Vendor Specific: Not Supported 00:08:15.346 Reset Timeout: 7500 ms 00:08:15.346 Doorbell Stride: 4 bytes 00:08:15.346 NVM Subsystem Reset: Not Supported 00:08:15.346 Command Sets Supported 00:08:15.346 NVM Command Set: Supported 00:08:15.346 Boot Partition: Not Supported 00:08:15.346 Memory Page Size Minimum: 4096 bytes 00:08:15.346 Memory Page Size Maximum: 65536 bytes 00:08:15.346 Persistent Memory Region: Not Supported 00:08:15.347 Optional Asynchronous Events Supported 00:08:15.347 Namespace Attribute Notices: Supported 00:08:15.347 Firmware Activation Notices: Not Supported 00:08:15.347 ANA Change Notices: Not Supported 00:08:15.347 PLE Aggregate Log Change Notices: Not Supported 00:08:15.347 LBA Status Info Alert Notices: Not Supported 00:08:15.347 EGE Aggregate Log Change Notices: Not Supported 00:08:15.347 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.347 Zone Descriptor Change Notices: Not Supported 00:08:15.347 Discovery Log Change Notices: Not Supported 00:08:15.347 Controller Attributes 00:08:15.347 128-bit Host Identifier: Not Supported 00:08:15.347 Non-Operational Permissive Mode: Not Supported 00:08:15.347 NVM Sets: Not Supported 00:08:15.347 Read Recovery Levels: Not Supported 00:08:15.347 Endurance Groups: Not Supported 00:08:15.347 Predictable Latency Mode: Not Supported 00:08:15.347 Traffic Based Keep Alive: Not Supported 00:08:15.347 Namespace Granularity: Not Supported 00:08:15.347 SQ Associations: Not Supported 00:08:15.347 UUID List: Not Supported 00:08:15.347 Multi-Domain Subsystem: Not Supported 00:08:15.347 Fixed Capacity Management: Not Supported 00:08:15.347 Variable Capacity Management: Not Supported 00:08:15.347 Delete Endurance Group: Not Supported 00:08:15.347 Delete NVM Set: Not Supported 00:08:15.347 Extended LBA Formats Supported: Supported 00:08:15.347 Flexible Data Placement Supported: Not Supported 00:08:15.347 00:08:15.347 Controller Memory Buffer Support 00:08:15.347 ================================ 00:08:15.347 Supported: No 00:08:15.347 00:08:15.347 Persistent Memory Region Support 00:08:15.347 ================================ 00:08:15.347 Supported: No 00:08:15.347 00:08:15.347
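[Editor's note, not part of the captured log: the nvme.sh trace above (for bdf in "${bdfs[@]}", then the spdk_nvme_identify call) is simply iterating the PCIe addresses under test and dumping identify data for each. A minimal bash reconstruction of that loop, assuming the bdfs array holds the four controllers exercised in this run:]

bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)  # assumed from this run
for bdf in "${bdfs[@]}"; do
    # same binary and flags as in the trace above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:$bdf" -i 0
done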
Admin Command Set Attributes 00:08:15.347 ============================ 00:08:15.347 Security Send/Receive: Not Supported 00:08:15.347 Format NVM: Supported 00:08:15.347 Firmware Activate/Download: Not Supported 00:08:15.347 Namespace Management: Supported 00:08:15.347 Device Self-Test: Not Supported 00:08:15.347 Directives: Supported 00:08:15.347 NVMe-MI: Not Supported 00:08:15.347 Virtualization Management: Not Supported 00:08:15.347 Doorbell Buffer Config: Supported 00:08:15.347 Get LBA Status Capability: Not Supported 00:08:15.347 Command & Feature Lockdown Capability: Not Supported 00:08:15.347 Abort Command Limit: 4 00:08:15.347 Async Event Request Limit: 4 00:08:15.347 Number of Firmware Slots: N/A 00:08:15.347 Firmware Slot 1 Read-Only: N/A 00:08:15.347 Firmware Activation Without Reset: N/A 00:08:15.347 Multiple Update Detection Support: N/A 00:08:15.347 Firmware Update Granularity: No Information Provided 00:08:15.347 Per-Namespace SMART Log: Yes 00:08:15.347 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.347 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:15.347 Command Effects Log Page: Supported 00:08:15.347 Get Log Page Extended Data: Supported 00:08:15.347 Telemetry Log Pages: Not Supported 00:08:15.347 Persistent Event Log Pages: Not Supported 00:08:15.347 Supported Log Pages Log Page: May Support 00:08:15.347 Commands Supported & Effects Log Page: Not Supported 00:08:15.347 Feature Identifiers & Effects Log Page: May Support 00:08:15.347 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.347 Data Area 4 for Telemetry Log: Not Supported 00:08:15.347 Error Log Page Entries Supported: 1 00:08:15.347 Keep Alive: Not Supported 00:08:15.347 00:08:15.347 NVM Command Set Attributes 00:08:15.347 ========================== 00:08:15.347 Submission Queue Entry Size 00:08:15.347 Max: 64 00:08:15.347 Min: 64 00:08:15.347 Completion Queue Entry Size 00:08:15.347 Max: 16 00:08:15.347 Min: 16 00:08:15.347 Number of Namespaces: 256 00:08:15.347 Compare Command: Supported 00:08:15.347 Write Uncorrectable Command: Not Supported 00:08:15.347 Dataset Management Command: Supported 00:08:15.347 Write Zeroes Command: Supported 00:08:15.347 Set Features Save Field: Supported 00:08:15.347 Reservations: Not Supported 00:08:15.347 Timestamp: Supported 00:08:15.347 Copy: Supported 00:08:15.347 Volatile Write Cache: Present 00:08:15.347 Atomic Write Unit (Normal): 1 00:08:15.347 Atomic Write Unit (PFail): 1 00:08:15.347 Atomic Compare & Write Unit: 1 00:08:15.347 Fused Compare & Write: Not Supported 00:08:15.347 Scatter-Gather List 00:08:15.347 SGL Command Set: Supported 00:08:15.347 SGL Keyed: Not Supported 00:08:15.347 SGL Bit Bucket Descriptor: Not Supported 00:08:15.347 SGL Metadata Pointer: Not Supported 00:08:15.347 Oversized SGL: Not Supported 00:08:15.347 SGL Metadata Address: Not Supported 00:08:15.347 SGL Offset: Not Supported 00:08:15.347 Transport SGL Data Block: Not Supported 00:08:15.347 Replay Protected Memory Block: Not Supported 00:08:15.347 00:08:15.347 Firmware Slot Information 00:08:15.347 ========================= 00:08:15.347 Active slot: 1 00:08:15.347 Slot 1 Firmware Revision: 1.0 00:08:15.347 00:08:15.347 00:08:15.347 Commands Supported and Effects 00:08:15.347 ============================== 00:08:15.347 Admin Commands 00:08:15.347 -------------- 00:08:15.347 Delete I/O Submission Queue (00h): Supported 00:08:15.347 Create I/O Submission Queue (01h): Supported 00:08:15.347 Get Log Page (02h): Supported 00:08:15.347 Delete I/O Completion Queue (04h): Supported 00:08:15.347 Create I/O Completion Queue (05h): Supported 00:08:15.347 Identify (06h): Supported 00:08:15.347 Abort (08h): Supported 
00:08:15.347 Set Features (09h): Supported 00:08:15.347 Get Features (0Ah): Supported 00:08:15.347 Asynchronous Event Request (0Ch): Supported 00:08:15.347 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.347 Directive Send (19h): Supported 00:08:15.347 Directive Receive (1Ah): Supported 00:08:15.347 Virtualization Management (1Ch): Supported 00:08:15.347 Doorbell Buffer Config (7Ch): Supported 00:08:15.347 Format NVM (80h): Supported LBA-Change 00:08:15.347 I/O Commands 00:08:15.347 ------------ 00:08:15.347 Flush (00h): Supported LBA-Change 00:08:15.347 Write (01h): Supported LBA-Change 00:08:15.347 Read (02h): Supported 00:08:15.347 Compare (05h): Supported 00:08:15.347 Write Zeroes (08h): Supported LBA-Change 00:08:15.347 Dataset Management (09h): Supported LBA-Change 00:08:15.347 Unknown (0Ch): Supported 00:08:15.347 Unknown (12h): Supported 00:08:15.347 Copy (19h): Supported LBA-Change 00:08:15.347 Unknown (1Dh): Supported LBA-Change 00:08:15.347 00:08:15.347 Error Log 00:08:15.347 ========= 00:08:15.347 00:08:15.347 Arbitration 00:08:15.347 =========== 00:08:15.347 Arbitration Burst: no limit 00:08:15.347 00:08:15.347 Power Management 00:08:15.347 ================ 00:08:15.347 Number of Power States: 1 00:08:15.347 Current Power State: Power State #0 00:08:15.347 Power State #0: 00:08:15.347 Max Power: 25.00 W 00:08:15.347 Non-Operational State: Operational 00:08:15.347 Entry Latency: 16 microseconds 00:08:15.347 Exit Latency: 4 microseconds 00:08:15.347 Relative Read Throughput: 0 00:08:15.347 Relative Read Latency: 0 00:08:15.347 Relative Write Throughput: 0 00:08:15.347 Relative Write Latency: 0 00:08:15.347 Idle Power: Not Reported 00:08:15.347 Active Power: Not Reported 00:08:15.347 Non-Operational Permissive Mode: Not Supported 00:08:15.347 00:08:15.347 Health Information 00:08:15.347 ================== 00:08:15.347 Critical Warnings: 00:08:15.347 Available Spare Space: OK 00:08:15.347 Temperature: OK 00:08:15.347 Device Reliability: OK 00:08:15.347 Read Only: No 00:08:15.347 Volatile Memory Backup: OK 00:08:15.347 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.347 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.347 Available Spare: 0% 00:08:15.347 Available Spare Threshold: 0% 00:08:15.347 Life Percentage Used: 0% 00:08:15.347 Data Units Read: 671 00:08:15.347 Data Units Written: 599 00:08:15.347 Host Read Commands: 36403 00:08:15.347 Host Write Commands: 36189 00:08:15.347 Controller Busy Time: 0 minutes 00:08:15.347 Power Cycles: 0 00:08:15.347 Power On Hours: 0 hours 00:08:15.347 Unsafe Shutdowns: 0 00:08:15.347 Unrecoverable Media Errors: 0 00:08:15.347 Lifetime Error Log Entries: 0 00:08:15.347 Warning Temperature Time: 0 minutes 00:08:15.347 Critical Temperature Time: 0 minutes 00:08:15.347 00:08:15.347 Number of Queues 00:08:15.347 ================ 00:08:15.347 Number of I/O Submission Queues: 64 00:08:15.347 Number of I/O Completion Queues: 64 00:08:15.347 00:08:15.347 ZNS Specific Controller Data 00:08:15.347 ============================ 00:08:15.347 Zone Append Size Limit: 0 00:08:15.347 00:08:15.347 00:08:15.347 Active Namespaces 00:08:15.348 ================= 00:08:15.348 Namespace ID:1 00:08:15.348 Error Recovery Timeout: Unlimited 00:08:15.348 Command Set Identifier: NVM (00h) 00:08:15.348 Deallocate: Supported 00:08:15.348 Deallocated/Unwritten Error: Supported 00:08:15.348 Deallocated Read Value: All 0x00 00:08:15.348 Deallocate in Write Zeroes: Not Supported 00:08:15.348 Deallocated Guard Field: 0xFFFF 00:08:15.348 Flush: 
Supported 00:08:15.348 Reservation: Not Supported 00:08:15.348 Metadata Transferred as: Separate Metadata Buffer 00:08:15.348 Namespace Sharing Capabilities: Private 00:08:15.348 Size (in LBAs): 1548666 (5GiB) 00:08:15.348 Capacity (in LBAs): 1548666 (5GiB) 00:08:15.348 Utilization (in LBAs): 1548666 (5GiB) 00:08:15.348 Thin Provisioning: Not Supported 00:08:15.348 Per-NS Atomic Units: No 00:08:15.348 Maximum Single Source Range Length: 128 00:08:15.348 Maximum Copy Length: 128 00:08:15.348 Maximum Source Range Count: 128 00:08:15.348 NGUID/EUI64 Never Reused: No 00:08:15.348 Namespace Write Protected: No 00:08:15.348 Number of LBA Formats: 8 00:08:15.348 Current LBA Format: LBA Format #07 00:08:15.348 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.348 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.348 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.348 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.348 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.348 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.348 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.348 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.348 00:08:15.348 NVM Specific Namespace Data 00:08:15.348 =========================== 00:08:15.348 Logical Block Storage Tag Mask: 0 00:08:15.348 Protection Information Capabilities: 00:08:15.348 16b Guard Protection Information Storage Tag Support: No 00:08:15.348 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.348 Storage Tag Check Read Support: No 00:08:15.348 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.348 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.348 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.348 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.348 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.348 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.348 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.348 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.348 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:15.348 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:15.348 ===================================================== 00:08:15.348 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:15.348 ===================================================== 00:08:15.348 Controller Capabilities/Features 00:08:15.348 ================================ 00:08:15.348 Vendor ID: 1b36 00:08:15.348 Subsystem Vendor ID: 1af4 00:08:15.348 Serial Number: 12341 00:08:15.348 Model Number: QEMU NVMe Ctrl 00:08:15.348 Firmware Version: 8.0.0 00:08:15.348 Recommended Arb Burst: 6 00:08:15.348 IEEE OUI Identifier: 00 54 52 00:08:15.348 Multi-path I/O 00:08:15.348 May have multiple subsystem ports: No 00:08:15.348 May have multiple controllers: No 00:08:15.348 Associated with SR-IOV VF: No 00:08:15.348 Max Data Transfer Size: 524288 00:08:15.348 Max Number of Namespaces: 256 00:08:15.348 Max Number of I/O Queues: 64 00:08:15.348 NVMe 
Specification Version (VS): 1.4 00:08:15.348 NVMe Specification Version (Identify): 1.4 00:08:15.348 Maximum Queue Entries: 2048 00:08:15.348 Contiguous Queues Required: Yes 00:08:15.348 Arbitration Mechanisms Supported 00:08:15.348 Weighted Round Robin: Not Supported 00:08:15.348 Vendor Specific: Not Supported 00:08:15.348 Reset Timeout: 7500 ms 00:08:15.348 Doorbell Stride: 4 bytes 00:08:15.348 NVM Subsystem Reset: Not Supported 00:08:15.348 Command Sets Supported 00:08:15.348 NVM Command Set: Supported 00:08:15.348 Boot Partition: Not Supported 00:08:15.348 Memory Page Size Minimum: 4096 bytes 00:08:15.348 Memory Page Size Maximum: 65536 bytes 00:08:15.348 Persistent Memory Region: Not Supported 00:08:15.348 Optional Asynchronous Events Supported 00:08:15.348 Namespace Attribute Notices: Supported 00:08:15.348 Firmware Activation Notices: Not Supported 00:08:15.348 ANA Change Notices: Not Supported 00:08:15.348 PLE Aggregate Log Change Notices: Not Supported 00:08:15.348 LBA Status Info Alert Notices: Not Supported 00:08:15.348 EGE Aggregate Log Change Notices: Not Supported 00:08:15.348 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.348 Zone Descriptor Change Notices: Not Supported 00:08:15.348 Discovery Log Change Notices: Not Supported 00:08:15.348 Controller Attributes 00:08:15.348 128-bit Host Identifier: Not Supported 00:08:15.348 Non-Operational Permissive Mode: Not Supported 00:08:15.348 NVM Sets: Not Supported 00:08:15.348 Read Recovery Levels: Not Supported 00:08:15.348 Endurance Groups: Not Supported 00:08:15.348 Predictable Latency Mode: Not Supported 00:08:15.348 Traffic Based Keep Alive: Not Supported 00:08:15.348 Namespace Granularity: Not Supported 00:08:15.348 SQ Associations: Not Supported 00:08:15.348 UUID List: Not Supported 00:08:15.348 Multi-Domain Subsystem: Not Supported 00:08:15.348 Fixed Capacity Management: Not Supported 00:08:15.348 Variable Capacity Management: Not Supported 00:08:15.348 Delete Endurance Group: Not Supported 00:08:15.348 Delete NVM Set: Not Supported 00:08:15.348 Extended LBA Formats Supported: Supported 00:08:15.348 Flexible Data Placement Supported: Not Supported 00:08:15.348 00:08:15.348 Controller Memory Buffer Support 00:08:15.348 ================================ 00:08:15.348 Supported: No 00:08:15.348 00:08:15.348 Persistent Memory Region Support 00:08:15.348 ================================ 00:08:15.348 Supported: No 00:08:15.348 00:08:15.348
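[Editor's note, not part of the captured log: dumps like these are easy to post-process with standard tools. For example, to pull a single field for the controller at 0000:00:11.0 shown above (its banner reports Serial Number: 12341):]

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 | grep 'Serial Number'
# expected, per the banner above: Serial Number: 12341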
Admin Command Set Attributes 00:08:15.348 ============================ 00:08:15.348 Security Send/Receive: Not Supported 00:08:15.348 Format NVM: Supported 00:08:15.348 Firmware Activate/Download: Not Supported 00:08:15.348 Namespace Management: Supported 00:08:15.348 Device Self-Test: Not Supported 00:08:15.348 Directives: Supported 00:08:15.348 NVMe-MI: Not Supported 00:08:15.348 Virtualization Management: Not Supported 00:08:15.348 Doorbell Buffer Config: Supported 00:08:15.348 Get LBA Status Capability: Not Supported 00:08:15.348 Command & Feature Lockdown Capability: Not Supported 00:08:15.348 Abort Command Limit: 4 00:08:15.348 Async Event Request Limit: 4 00:08:15.348 Number of Firmware Slots: N/A 00:08:15.348 Firmware Slot 1 Read-Only: N/A 00:08:15.348 Firmware Activation Without Reset: N/A 00:08:15.348 Multiple Update Detection Support: N/A 00:08:15.348 Firmware Update Granularity: No Information Provided 00:08:15.348 Per-Namespace SMART Log: Yes 00:08:15.348 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.348 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:15.348 Command Effects Log Page: Supported 00:08:15.348 Get Log Page Extended Data: Supported 00:08:15.348 Telemetry Log Pages: Not Supported 00:08:15.348 Persistent Event Log Pages: Not Supported 00:08:15.348 Supported Log Pages Log Page: May Support 00:08:15.348 Commands Supported & Effects Log Page: Not Supported 00:08:15.348 Feature Identifiers & Effects Log Page: May Support 00:08:15.348 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.348 Data Area 4 for Telemetry Log: Not Supported 00:08:15.348 Error Log Page Entries Supported: 1 00:08:15.348 Keep Alive: Not Supported 00:08:15.348 00:08:15.348 NVM Command Set Attributes 00:08:15.348 ========================== 00:08:15.348 Submission Queue Entry Size 00:08:15.348 Max: 64 00:08:15.348 Min: 64 00:08:15.348 Completion Queue Entry Size 00:08:15.348 Max: 16 00:08:15.348 Min: 16 00:08:15.348 Number of Namespaces: 256 00:08:15.348 Compare Command: Supported 00:08:15.348 Write Uncorrectable Command: Not Supported 00:08:15.348 Dataset Management Command: Supported 00:08:15.348 Write Zeroes Command: Supported 00:08:15.348 Set Features Save Field: Supported 00:08:15.348 Reservations: Not Supported 00:08:15.348 Timestamp: Supported 00:08:15.348 Copy: Supported 00:08:15.348 Volatile Write Cache: Present 00:08:15.348 Atomic Write Unit (Normal): 1 00:08:15.348 Atomic Write Unit (PFail): 1 00:08:15.348 Atomic Compare & Write Unit: 1 00:08:15.348 Fused Compare & Write: Not Supported 00:08:15.348 Scatter-Gather List 00:08:15.348 SGL Command Set: Supported 00:08:15.348 SGL Keyed: Not Supported 00:08:15.348 SGL Bit Bucket Descriptor: Not Supported 00:08:15.348 SGL Metadata Pointer: Not Supported 00:08:15.349 Oversized SGL: Not Supported 00:08:15.349 SGL Metadata Address: Not Supported 00:08:15.349 SGL Offset: Not Supported 00:08:15.349 Transport SGL Data Block: Not Supported 00:08:15.349 Replay Protected Memory Block: Not Supported 00:08:15.349 00:08:15.349 Firmware Slot Information 00:08:15.349 ========================= 00:08:15.349 Active slot: 1 00:08:15.349 Slot 1 Firmware Revision: 1.0 00:08:15.349 00:08:15.349 00:08:15.349 Commands Supported and Effects 00:08:15.349 ============================== 00:08:15.349 Admin Commands 00:08:15.349 -------------- 00:08:15.349 Delete I/O Submission Queue (00h): Supported 00:08:15.349 Create I/O Submission Queue (01h): Supported 00:08:15.349 Get Log Page (02h): Supported 00:08:15.349 Delete I/O Completion Queue (04h): Supported 00:08:15.349 Create I/O Completion Queue (05h): Supported 00:08:15.349 Identify (06h): Supported 00:08:15.349 Abort (08h): Supported 00:08:15.349 Set Features (09h): Supported 00:08:15.349 Get Features (0Ah): Supported 00:08:15.349 Asynchronous Event Request (0Ch): Supported 00:08:15.349 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.349 Directive Send (19h): Supported 00:08:15.349 Directive Receive (1Ah): Supported 00:08:15.349 Virtualization Management (1Ch): Supported 00:08:15.349 Doorbell Buffer Config (7Ch): Supported 00:08:15.349 Format NVM (80h): Supported LBA-Change 00:08:15.349 I/O Commands 00:08:15.349 ------------ 00:08:15.349 Flush (00h): Supported LBA-Change 00:08:15.349 Write (01h): Supported LBA-Change 00:08:15.349 Read (02h): Supported 00:08:15.349 Compare (05h): Supported 00:08:15.349 Write Zeroes (08h): Supported LBA-Change 00:08:15.349 Dataset Management (09h): Supported LBA-Change 00:08:15.349 Unknown (0Ch): Supported 00:08:15.349 Unknown (12h): Supported 00:08:15.349 Copy (19h): Supported LBA-Change 00:08:15.349 Unknown (1Dh): 
Supported LBA-Change 00:08:15.349 00:08:15.349 Error Log 00:08:15.349 ========= 00:08:15.349 00:08:15.349 Arbitration 00:08:15.349 =========== 00:08:15.349 Arbitration Burst: no limit 00:08:15.349 00:08:15.349 Power Management 00:08:15.349 ================ 00:08:15.349 Number of Power States: 1 00:08:15.349 Current Power State: Power State #0 00:08:15.349 Power State #0: 00:08:15.349 Max Power: 25.00 W 00:08:15.349 Non-Operational State: Operational 00:08:15.349 Entry Latency: 16 microseconds 00:08:15.349 Exit Latency: 4 microseconds 00:08:15.349 Relative Read Throughput: 0 00:08:15.349 Relative Read Latency: 0 00:08:15.349 Relative Write Throughput: 0 00:08:15.349 Relative Write Latency: 0 00:08:15.349 Idle Power: Not Reported 00:08:15.349 Active Power: Not Reported 00:08:15.349 Non-Operational Permissive Mode: Not Supported 00:08:15.349 00:08:15.349 Health Information 00:08:15.349 ================== 00:08:15.349 Critical Warnings: 00:08:15.349 Available Spare Space: OK 00:08:15.349 Temperature: OK 00:08:15.349 Device Reliability: OK 00:08:15.349 Read Only: No 00:08:15.349 Volatile Memory Backup: OK 00:08:15.349 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.349 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.349 Available Spare: 0% 00:08:15.349 Available Spare Threshold: 0% 00:08:15.349 Life Percentage Used: 0% 00:08:15.349 Data Units Read: 1021 00:08:15.349 Data Units Written: 888 00:08:15.349 Host Read Commands: 53942 00:08:15.349 Host Write Commands: 52731 00:08:15.349 Controller Busy Time: 0 minutes 00:08:15.349 Power Cycles: 0 00:08:15.349 Power On Hours: 0 hours 00:08:15.349 Unsafe Shutdowns: 0 00:08:15.349 Unrecoverable Media Errors: 0 00:08:15.349 Lifetime Error Log Entries: 0 00:08:15.349 Warning Temperature Time: 0 minutes 00:08:15.349 Critical Temperature Time: 0 minutes 00:08:15.349 00:08:15.349 Number of Queues 00:08:15.349 ================ 00:08:15.349 Number of I/O Submission Queues: 64 00:08:15.349 Number of I/O Completion Queues: 64 00:08:15.349 00:08:15.349 ZNS Specific Controller Data 00:08:15.349 ============================ 00:08:15.349 Zone Append Size Limit: 0 00:08:15.349 00:08:15.349 00:08:15.349 Active Namespaces 00:08:15.349 ================= 00:08:15.349 Namespace ID:1 00:08:15.349 Error Recovery Timeout: Unlimited 00:08:15.349 Command Set Identifier: NVM (00h) 00:08:15.349 Deallocate: Supported 00:08:15.349 Deallocated/Unwritten Error: Supported 00:08:15.349 Deallocated Read Value: All 0x00 00:08:15.349 Deallocate in Write Zeroes: Not Supported 00:08:15.349 Deallocated Guard Field: 0xFFFF 00:08:15.349 Flush: Supported 00:08:15.349 Reservation: Not Supported 00:08:15.349 Namespace Sharing Capabilities: Private 00:08:15.349 Size (in LBAs): 1310720 (5GiB) 00:08:15.349 Capacity (in LBAs): 1310720 (5GiB) 00:08:15.349 Utilization (in LBAs): 1310720 (5GiB) 00:08:15.349 Thin Provisioning: Not Supported 00:08:15.349 Per-NS Atomic Units: No 00:08:15.349 Maximum Single Source Range Length: 128 00:08:15.349 Maximum Copy Length: 128 00:08:15.349 Maximum Source Range Count: 128 00:08:15.349 NGUID/EUI64 Never Reused: No 00:08:15.349 Namespace Write Protected: No 00:08:15.349 Number of LBA Formats: 8 00:08:15.349 Current LBA Format: LBA Format #04 00:08:15.349 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.349 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.349 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.349 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.349 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:15.349 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.349 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.349 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.349 00:08:15.349 NVM Specific Namespace Data 00:08:15.349 =========================== 00:08:15.349 Logical Block Storage Tag Mask: 0 00:08:15.349 Protection Information Capabilities: 00:08:15.349 16b Guard Protection Information Storage Tag Support: No 00:08:15.349 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.349 Storage Tag Check Read Support: No 00:08:15.349 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.349 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.349 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.349 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.349 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.349 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.349 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.349 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.349 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:15.349 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:15.607 ===================================================== 00:08:15.607 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:15.607 ===================================================== 00:08:15.607 Controller Capabilities/Features 00:08:15.607 ================================ 00:08:15.607 Vendor ID: 1b36 00:08:15.607 Subsystem Vendor ID: 1af4 00:08:15.607 Serial Number: 12342 00:08:15.607 Model Number: QEMU NVMe Ctrl 00:08:15.607 Firmware Version: 8.0.0 00:08:15.607 Recommended Arb Burst: 6 00:08:15.607 IEEE OUI Identifier: 00 54 52 00:08:15.607 Multi-path I/O 00:08:15.607 May have multiple subsystem ports: No 00:08:15.607 May have multiple controllers: No 00:08:15.607 Associated with SR-IOV VF: No 00:08:15.607 Max Data Transfer Size: 524288 00:08:15.607 Max Number of Namespaces: 256 00:08:15.607 Max Number of I/O Queues: 64 00:08:15.607 NVMe Specification Version (VS): 1.4 00:08:15.607 NVMe Specification Version (Identify): 1.4 00:08:15.607 Maximum Queue Entries: 2048 00:08:15.607 Contiguous Queues Required: Yes 00:08:15.607 Arbitration Mechanisms Supported 00:08:15.607 Weighted Round Robin: Not Supported 00:08:15.607 Vendor Specific: Not Supported 00:08:15.607 Reset Timeout: 7500 ms 00:08:15.607 Doorbell Stride: 4 bytes 00:08:15.607 NVM Subsystem Reset: Not Supported 00:08:15.607 Command Sets Supported 00:08:15.607 NVM Command Set: Supported 00:08:15.607 Boot Partition: Not Supported 00:08:15.607 Memory Page Size Minimum: 4096 bytes 00:08:15.607 Memory Page Size Maximum: 65536 bytes 00:08:15.607 Persistent Memory Region: Not Supported 00:08:15.607 Optional Asynchronous Events Supported 00:08:15.607 Namespace Attribute Notices: Supported 00:08:15.607 Firmware Activation Notices: Not Supported 00:08:15.607 ANA Change Notices: Not Supported 00:08:15.607 PLE Aggregate Log Change Notices: Not Supported 00:08:15.607 LBA Status Info Alert Notices: 
Not Supported 00:08:15.607 EGE Aggregate Log Change Notices: Not Supported 00:08:15.607 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.607 Zone Descriptor Change Notices: Not Supported 00:08:15.607 Discovery Log Change Notices: Not Supported 00:08:15.607 Controller Attributes 00:08:15.607 128-bit Host Identifier: Not Supported 00:08:15.607 Non-Operational Permissive Mode: Not Supported 00:08:15.607 NVM Sets: Not Supported 00:08:15.607 Read Recovery Levels: Not Supported 00:08:15.607 Endurance Groups: Not Supported 00:08:15.607 Predictable Latency Mode: Not Supported 00:08:15.607 Traffic Based Keep Alive: Not Supported 00:08:15.607 Namespace Granularity: Not Supported 00:08:15.607 SQ Associations: Not Supported 00:08:15.607 UUID List: Not Supported 00:08:15.607 Multi-Domain Subsystem: Not Supported 00:08:15.607 Fixed Capacity Management: Not Supported 00:08:15.607 Variable Capacity Management: Not Supported 00:08:15.607 Delete Endurance Group: Not Supported 00:08:15.607 Delete NVM Set: Not Supported 00:08:15.607 Extended LBA Formats Supported: Supported 00:08:15.607 Flexible Data Placement Supported: Not Supported 00:08:15.607 00:08:15.607 Controller Memory Buffer Support 00:08:15.607 ================================ 00:08:15.607 Supported: No 00:08:15.607 00:08:15.607 Persistent Memory Region Support 00:08:15.607 ================================ 00:08:15.607 Supported: No 00:08:15.607 00:08:15.607 Admin Command Set Attributes 00:08:15.607 ============================ 00:08:15.607 Security Send/Receive: Not Supported 00:08:15.607 Format NVM: Supported 00:08:15.608 Firmware Activate/Download: Not Supported 00:08:15.608 Namespace Management: Supported 00:08:15.608 Device Self-Test: Not Supported 00:08:15.608 Directives: Supported 00:08:15.608 NVMe-MI: Not Supported 00:08:15.608 Virtualization Management: Not Supported 00:08:15.608 Doorbell Buffer Config: Supported 00:08:15.608 Get LBA Status Capability: Not Supported 00:08:15.608 Command & Feature Lockdown Capability: Not Supported 00:08:15.608 Abort Command Limit: 4 00:08:15.608 Async Event Request Limit: 4 00:08:15.608 Number of Firmware Slots: N/A 00:08:15.608 Firmware Slot 1 Read-Only: N/A 00:08:15.608 Firmware Activation Without Reset: N/A 00:08:15.608 Multiple Update Detection Support: N/A 00:08:15.608 Firmware Update Granularity: No Information Provided 00:08:15.608 Per-Namespace SMART Log: Yes 00:08:15.608 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.608 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:15.608 Command Effects Log Page: Supported 00:08:15.608 Get Log Page Extended Data: Supported 00:08:15.608 Telemetry Log Pages: Not Supported 00:08:15.608 Persistent Event Log Pages: Not Supported 00:08:15.608 Supported Log Pages Log Page: May Support 00:08:15.608 Commands Supported & Effects Log Page: Not Supported 00:08:15.608 Feature Identifiers & Effects Log Page: May Support 00:08:15.608 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.608 Data Area 4 for Telemetry Log: Not Supported 00:08:15.608 Error Log Page Entries Supported: 1 00:08:15.608 Keep Alive: Not Supported 00:08:15.608 00:08:15.608 NVM Command Set Attributes 00:08:15.608 ========================== 00:08:15.608 Submission Queue Entry Size 00:08:15.608 Max: 64 00:08:15.608 Min: 64 00:08:15.608 Completion Queue Entry Size 00:08:15.608 Max: 16 00:08:15.608 Min: 16 00:08:15.608 Number of Namespaces: 256 00:08:15.608 Compare Command: Supported 00:08:15.608 Write Uncorrectable Command: Not Supported 00:08:15.608 Dataset Management Command: 
Supported 00:08:15.608 Write Zeroes Command: Supported 00:08:15.608 Set Features Save Field: Supported 00:08:15.608 Reservations: Not Supported 00:08:15.608 Timestamp: Supported 00:08:15.608 Copy: Supported 00:08:15.608 Volatile Write Cache: Present 00:08:15.608 Atomic Write Unit (Normal): 1 00:08:15.608 Atomic Write Unit (PFail): 1 00:08:15.608 Atomic Compare & Write Unit: 1 00:08:15.608 Fused Compare & Write: Not Supported 00:08:15.608 Scatter-Gather List 00:08:15.608 SGL Command Set: Supported 00:08:15.608 SGL Keyed: Not Supported 00:08:15.608 SGL Bit Bucket Descriptor: Not Supported 00:08:15.608 SGL Metadata Pointer: Not Supported 00:08:15.608 Oversized SGL: Not Supported 00:08:15.608 SGL Metadata Address: Not Supported 00:08:15.608 SGL Offset: Not Supported 00:08:15.608 Transport SGL Data Block: Not Supported 00:08:15.608 Replay Protected Memory Block: Not Supported 00:08:15.608 00:08:15.608 Firmware Slot Information 00:08:15.608 ========================= 00:08:15.608 Active slot: 1 00:08:15.608 Slot 1 Firmware Revision: 1.0 00:08:15.608 00:08:15.608 00:08:15.608 Commands Supported and Effects 00:08:15.608 ============================== 00:08:15.608 Admin Commands 00:08:15.608 -------------- 00:08:15.608 Delete I/O Submission Queue (00h): Supported 00:08:15.608 Create I/O Submission Queue (01h): Supported 00:08:15.608 Get Log Page (02h): Supported 00:08:15.608 Delete I/O Completion Queue (04h): Supported 00:08:15.608 Create I/O Completion Queue (05h): Supported 00:08:15.608 Identify (06h): Supported 00:08:15.608 Abort (08h): Supported 00:08:15.608 Set Features (09h): Supported 00:08:15.608 Get Features (0Ah): Supported 00:08:15.608 Asynchronous Event Request (0Ch): Supported 00:08:15.608 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.608 Directive Send (19h): Supported 00:08:15.608 Directive Receive (1Ah): Supported 00:08:15.608 Virtualization Management (1Ch): Supported 00:08:15.608 Doorbell Buffer Config (7Ch): Supported 00:08:15.608 Format NVM (80h): Supported LBA-Change 00:08:15.608 I/O Commands 00:08:15.608 ------------ 00:08:15.608 Flush (00h): Supported LBA-Change 00:08:15.608 Write (01h): Supported LBA-Change 00:08:15.608 Read (02h): Supported 00:08:15.608 Compare (05h): Supported 00:08:15.608 Write Zeroes (08h): Supported LBA-Change 00:08:15.608 Dataset Management (09h): Supported LBA-Change 00:08:15.608 Unknown (0Ch): Supported 00:08:15.608 Unknown (12h): Supported 00:08:15.608 Copy (19h): Supported LBA-Change 00:08:15.608 Unknown (1Dh): Supported LBA-Change 00:08:15.608 00:08:15.608 Error Log 00:08:15.608 ========= 00:08:15.608 00:08:15.608 Arbitration 00:08:15.608 =========== 00:08:15.608 Arbitration Burst: no limit 00:08:15.608 00:08:15.608 Power Management 00:08:15.608 ================ 00:08:15.608 Number of Power States: 1 00:08:15.608 Current Power State: Power State #0 00:08:15.608 Power State #0: 00:08:15.608 Max Power: 25.00 W 00:08:15.608 Non-Operational State: Operational 00:08:15.608 Entry Latency: 16 microseconds 00:08:15.608 Exit Latency: 4 microseconds 00:08:15.608 Relative Read Throughput: 0 00:08:15.608 Relative Read Latency: 0 00:08:15.608 Relative Write Throughput: 0 00:08:15.608 Relative Write Latency: 0 00:08:15.608 Idle Power: Not Reported 00:08:15.608 Active Power: Not Reported 00:08:15.608 Non-Operational Permissive Mode: Not Supported 00:08:15.608 00:08:15.608 Health Information 00:08:15.608 ================== 00:08:15.608 Critical Warnings: 00:08:15.608 Available Spare Space: OK 00:08:15.608 Temperature: OK 00:08:15.608 Device 
Reliability: OK 00:08:15.608 Read Only: No 00:08:15.608 Volatile Memory Backup: OK 00:08:15.608 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.608 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.608 Available Spare: 0% 00:08:15.608 Available Spare Threshold: 0% 00:08:15.608 Life Percentage Used: 0% 00:08:15.608 Data Units Read: 2126 00:08:15.608 Data Units Written: 1914 00:08:15.608 Host Read Commands: 110872 00:08:15.608 Host Write Commands: 109141 00:08:15.608 Controller Busy Time: 0 minutes 00:08:15.608 Power Cycles: 0 00:08:15.608 Power On Hours: 0 hours 00:08:15.608 Unsafe Shutdowns: 0 00:08:15.608 Unrecoverable Media Errors: 0 00:08:15.608 Lifetime Error Log Entries: 0 00:08:15.608 Warning Temperature Time: 0 minutes 00:08:15.608 Critical Temperature Time: 0 minutes 00:08:15.608 00:08:15.608 Number of Queues 00:08:15.608 ================ 00:08:15.608 Number of I/O Submission Queues: 64 00:08:15.608 Number of I/O Completion Queues: 64 00:08:15.608 00:08:15.608 ZNS Specific Controller Data 00:08:15.608 ============================ 00:08:15.608 Zone Append Size Limit: 0 00:08:15.608 00:08:15.608 00:08:15.608 Active Namespaces 00:08:15.608 ================= 00:08:15.608 Namespace ID:1 00:08:15.608 Error Recovery Timeout: Unlimited 00:08:15.608 Command Set Identifier: NVM (00h) 00:08:15.608 Deallocate: Supported 00:08:15.608 Deallocated/Unwritten Error: Supported 00:08:15.608 Deallocated Read Value: All 0x00 00:08:15.608 Deallocate in Write Zeroes: Not Supported 00:08:15.608 Deallocated Guard Field: 0xFFFF 00:08:15.608 Flush: Supported 00:08:15.608 Reservation: Not Supported 00:08:15.608 Namespace Sharing Capabilities: Private 00:08:15.608 Size (in LBAs): 1048576 (4GiB) 00:08:15.608 Capacity (in LBAs): 1048576 (4GiB) 00:08:15.608 Utilization (in LBAs): 1048576 (4GiB) 00:08:15.608 Thin Provisioning: Not Supported 00:08:15.608 Per-NS Atomic Units: No 00:08:15.608 Maximum Single Source Range Length: 128 00:08:15.608 Maximum Copy Length: 128 00:08:15.608 Maximum Source Range Count: 128 00:08:15.608 NGUID/EUI64 Never Reused: No 00:08:15.608 Namespace Write Protected: No 00:08:15.608 Number of LBA Formats: 8 00:08:15.608 Current LBA Format: LBA Format #04 00:08:15.608 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.608 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.608 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.608 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.608 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.608 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.608 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.608 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.608 00:08:15.608 NVM Specific Namespace Data 00:08:15.608 =========================== 00:08:15.608 Logical Block Storage Tag Mask: 0 00:08:15.608 Protection Information Capabilities: 00:08:15.608 16b Guard Protection Information Storage Tag Support: No 00:08:15.608 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.608 Storage Tag Check Read Support: No 00:08:15.608 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Namespace ID:2 00:08:15.609 Error Recovery Timeout: Unlimited 00:08:15.609 Command Set Identifier: NVM (00h) 00:08:15.609 Deallocate: Supported 00:08:15.609 Deallocated/Unwritten Error: Supported 00:08:15.609 Deallocated Read Value: All 0x00 00:08:15.609 Deallocate in Write Zeroes: Not Supported 00:08:15.609 Deallocated Guard Field: 0xFFFF 00:08:15.609 Flush: Supported 00:08:15.609 Reservation: Not Supported 00:08:15.609 Namespace Sharing Capabilities: Private 00:08:15.609 Size (in LBAs): 1048576 (4GiB) 00:08:15.609 Capacity (in LBAs): 1048576 (4GiB) 00:08:15.609 Utilization (in LBAs): 1048576 (4GiB) 00:08:15.609 Thin Provisioning: Not Supported 00:08:15.609 Per-NS Atomic Units: No 00:08:15.609 Maximum Single Source Range Length: 128 00:08:15.609 Maximum Copy Length: 128 00:08:15.609 Maximum Source Range Count: 128 00:08:15.609 NGUID/EUI64 Never Reused: No 00:08:15.609 Namespace Write Protected: No 00:08:15.609 Number of LBA Formats: 8 00:08:15.609 Current LBA Format: LBA Format #04 00:08:15.609 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.609 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.609 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.609 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.609 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.609 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.609 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.609 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.609 00:08:15.609 NVM Specific Namespace Data 00:08:15.609 =========================== 00:08:15.609 Logical Block Storage Tag Mask: 0 00:08:15.609 Protection Information Capabilities: 00:08:15.609 16b Guard Protection Information Storage Tag Support: No 00:08:15.609 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.609 Storage Tag Check Read Support: No 00:08:15.609 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Namespace ID:3 00:08:15.609 Error Recovery Timeout: Unlimited 00:08:15.609 Command Set Identifier: NVM (00h) 00:08:15.609 Deallocate: Supported 00:08:15.609 Deallocated/Unwritten Error: Supported 00:08:15.609 Deallocated Read Value: All 0x00 00:08:15.609 Deallocate in Write Zeroes: Not Supported 00:08:15.609 Deallocated Guard Field: 0xFFFF 00:08:15.609 Flush: Supported 00:08:15.609 Reservation: Not Supported 00:08:15.609 
Namespace Sharing Capabilities: Private 00:08:15.609 Size (in LBAs): 1048576 (4GiB) 00:08:15.609 Capacity (in LBAs): 1048576 (4GiB) 00:08:15.609 Utilization (in LBAs): 1048576 (4GiB) 00:08:15.609 Thin Provisioning: Not Supported 00:08:15.609 Per-NS Atomic Units: No 00:08:15.609 Maximum Single Source Range Length: 128 00:08:15.609 Maximum Copy Length: 128 00:08:15.609 Maximum Source Range Count: 128 00:08:15.609 NGUID/EUI64 Never Reused: No 00:08:15.609 Namespace Write Protected: No 00:08:15.609 Number of LBA Formats: 8 00:08:15.609 Current LBA Format: LBA Format #04 00:08:15.609 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.609 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.609 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.609 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.609 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.609 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.609 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:15.609 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.609 00:08:15.609 NVM Specific Namespace Data 00:08:15.609 =========================== 00:08:15.609 Logical Block Storage Tag Mask: 0 00:08:15.609 Protection Information Capabilities: 00:08:15.609 16b Guard Protection Information Storage Tag Support: No 00:08:15.609 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.609 Storage Tag Check Read Support: No 00:08:15.609 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.609 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:15.609 11:39:28 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:15.867 ===================================================== 00:08:15.867 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:15.867 ===================================================== 00:08:15.867 Controller Capabilities/Features 00:08:15.867 ================================ 00:08:15.867 Vendor ID: 1b36 00:08:15.867 Subsystem Vendor ID: 1af4 00:08:15.867 Serial Number: 12343 00:08:15.867 Model Number: QEMU NVMe Ctrl 00:08:15.867 Firmware Version: 8.0.0 00:08:15.867 Recommended Arb Burst: 6 00:08:15.867 IEEE OUI Identifier: 00 54 52 00:08:15.867 Multi-path I/O 00:08:15.867 May have multiple subsystem ports: No 00:08:15.867 May have multiple controllers: Yes 00:08:15.867 Associated with SR-IOV VF: No 00:08:15.867 Max Data Transfer Size: 524288 00:08:15.867 Max Number of Namespaces: 256 00:08:15.867 Max Number of I/O Queues: 64 00:08:15.867 NVMe Specification Version (VS): 1.4 00:08:15.867 NVMe Specification Version (Identify): 1.4 00:08:15.867 Maximum Queue Entries: 2048 
00:08:15.867 Contiguous Queues Required: Yes 00:08:15.867 Arbitration Mechanisms Supported 00:08:15.867 Weighted Round Robin: Not Supported 00:08:15.867 Vendor Specific: Not Supported 00:08:15.867 Reset Timeout: 7500 ms 00:08:15.867 Doorbell Stride: 4 bytes 00:08:15.867 NVM Subsystem Reset: Not Supported 00:08:15.867 Command Sets Supported 00:08:15.867 NVM Command Set: Supported 00:08:15.867 Boot Partition: Not Supported 00:08:15.867 Memory Page Size Minimum: 4096 bytes 00:08:15.867 Memory Page Size Maximum: 65536 bytes 00:08:15.867 Persistent Memory Region: Not Supported 00:08:15.867 Optional Asynchronous Events Supported 00:08:15.867 Namespace Attribute Notices: Supported 00:08:15.867 Firmware Activation Notices: Not Supported 00:08:15.867 ANA Change Notices: Not Supported 00:08:15.867 PLE Aggregate Log Change Notices: Not Supported 00:08:15.867 LBA Status Info Alert Notices: Not Supported 00:08:15.867 EGE Aggregate Log Change Notices: Not Supported 00:08:15.867 Normal NVM Subsystem Shutdown event: Not Supported 00:08:15.867 Zone Descriptor Change Notices: Not Supported 00:08:15.867 Discovery Log Change Notices: Not Supported 00:08:15.867 Controller Attributes 00:08:15.867 128-bit Host Identifier: Not Supported 00:08:15.867 Non-Operational Permissive Mode: Not Supported 00:08:15.867 NVM Sets: Not Supported 00:08:15.867 Read Recovery Levels: Not Supported 00:08:15.867 Endurance Groups: Supported 00:08:15.867 Predictable Latency Mode: Not Supported 00:08:15.867 Traffic Based Keep Alive: Not Supported 00:08:15.867 Namespace Granularity: Not Supported 00:08:15.867 SQ Associations: Not Supported 00:08:15.867 UUID List: Not Supported 00:08:15.867 Multi-Domain Subsystem: Not Supported 00:08:15.867 Fixed Capacity Management: Not Supported 00:08:15.867 Variable Capacity Management: Not Supported 00:08:15.867 Delete Endurance Group: Not Supported 00:08:15.867 Delete NVM Set: Not Supported 00:08:15.867 Extended LBA Formats Supported: Supported 00:08:15.867 Flexible Data Placement Supported: Supported 00:08:15.867 00:08:15.867 Controller Memory Buffer Support 00:08:15.867 ================================ 00:08:15.867 Supported: No 00:08:15.867 00:08:15.867 Persistent Memory Region Support 00:08:15.867 ================================ 00:08:15.867 Supported: No 00:08:15.867 00:08:15.867 Admin Command Set Attributes 00:08:15.867 ============================ 00:08:15.867 Security Send/Receive: Not Supported 00:08:15.867 Format NVM: Supported 00:08:15.867 Firmware Activate/Download: Not Supported 00:08:15.867 Namespace Management: Supported 00:08:15.867 Device Self-Test: Not Supported 00:08:15.867 Directives: Supported 00:08:15.867 NVMe-MI: Not Supported 00:08:15.867 Virtualization Management: Not Supported 00:08:15.868 Doorbell Buffer Config: Supported 00:08:15.868 Get LBA Status Capability: Not Supported 00:08:15.868 Command & Feature Lockdown Capability: Not Supported 00:08:15.868 Abort Command Limit: 4 00:08:15.868 Async Event Request Limit: 4 00:08:15.868 Number of Firmware Slots: N/A 00:08:15.868 Firmware Slot 1 Read-Only: N/A 00:08:15.868 Firmware Activation Without Reset: N/A 00:08:15.868 Multiple Update Detection Support: N/A 00:08:15.868 Firmware Update Granularity: No Information Provided 00:08:15.868 Per-Namespace SMART Log: Yes 00:08:15.868 Asymmetric Namespace Access Log Page: Not Supported 00:08:15.868 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:15.868 Command Effects Log Page: Supported 00:08:15.868 Get Log Page Extended Data: Supported 00:08:15.868 Telemetry Log Pages: Not
Supported 00:08:15.868 Persistent Event Log Pages: Not Supported 00:08:15.868 Supported Log Pages Log Page: May Support 00:08:15.868 Commands Supported & Effects Log Page: Not Supported 00:08:15.868 Feature Identifiers & Effects Log Page: May Support 00:08:15.868 NVMe-MI Commands & Effects Log Page: May Support 00:08:15.868 Data Area 4 for Telemetry Log: Not Supported 00:08:15.868 Error Log Page Entries Supported: 1 00:08:15.868 Keep Alive: Not Supported 00:08:15.868 00:08:15.868 NVM Command Set Attributes 00:08:15.868 ========================== 00:08:15.868 Submission Queue Entry Size 00:08:15.868 Max: 64 00:08:15.868 Min: 64 00:08:15.868 Completion Queue Entry Size 00:08:15.868 Max: 16 00:08:15.868 Min: 16 00:08:15.868 Number of Namespaces: 256 00:08:15.868 Compare Command: Supported 00:08:15.868 Write Uncorrectable Command: Not Supported 00:08:15.868 Dataset Management Command: Supported 00:08:15.868 Write Zeroes Command: Supported 00:08:15.868 Set Features Save Field: Supported 00:08:15.868 Reservations: Not Supported 00:08:15.868 Timestamp: Supported 00:08:15.868 Copy: Supported 00:08:15.868 Volatile Write Cache: Present 00:08:15.868 Atomic Write Unit (Normal): 1 00:08:15.868 Atomic Write Unit (PFail): 1 00:08:15.868 Atomic Compare & Write Unit: 1 00:08:15.868 Fused Compare & Write: Not Supported 00:08:15.868 Scatter-Gather List 00:08:15.868 SGL Command Set: Supported 00:08:15.868 SGL Keyed: Not Supported 00:08:15.868 SGL Bit Bucket Descriptor: Not Supported 00:08:15.868 SGL Metadata Pointer: Not Supported 00:08:15.868 Oversized SGL: Not Supported 00:08:15.868 SGL Metadata Address: Not Supported 00:08:15.868 SGL Offset: Not Supported 00:08:15.868 Transport SGL Data Block: Not Supported 00:08:15.868 Replay Protected Memory Block: Not Supported 00:08:15.868 00:08:15.868 Firmware Slot Information 00:08:15.868 ========================= 00:08:15.868 Active slot: 1 00:08:15.868 Slot 1 Firmware Revision: 1.0 00:08:15.868 00:08:15.868 00:08:15.868 Commands Supported and Effects 00:08:15.868 ============================== 00:08:15.868 Admin Commands 00:08:15.868 -------------- 00:08:15.868 Delete I/O Submission Queue (00h): Supported 00:08:15.868 Create I/O Submission Queue (01h): Supported 00:08:15.868 Get Log Page (02h): Supported 00:08:15.868 Delete I/O Completion Queue (04h): Supported 00:08:15.868 Create I/O Completion Queue (05h): Supported 00:08:15.868 Identify (06h): Supported 00:08:15.868 Abort (08h): Supported 00:08:15.868 Set Features (09h): Supported 00:08:15.868 Get Features (0Ah): Supported 00:08:15.868 Asynchronous Event Request (0Ch): Supported 00:08:15.868 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:15.868 Directive Send (19h): Supported 00:08:15.868 Directive Receive (1Ah): Supported 00:08:15.868 Virtualization Management (1Ch): Supported 00:08:15.868 Doorbell Buffer Config (7Ch): Supported 00:08:15.868 Format NVM (80h): Supported LBA-Change 00:08:15.868 I/O Commands 00:08:15.868 ------------ 00:08:15.868 Flush (00h): Supported LBA-Change 00:08:15.868 Write (01h): Supported LBA-Change 00:08:15.868 Read (02h): Supported 00:08:15.868 Compare (05h): Supported 00:08:15.868 Write Zeroes (08h): Supported LBA-Change 00:08:15.868 Dataset Management (09h): Supported LBA-Change 00:08:15.868 Unknown (0Ch): Supported 00:08:15.868 Unknown (12h): Supported 00:08:15.868 Copy (19h): Supported LBA-Change 00:08:15.868 Unknown (1Dh): Supported LBA-Change 00:08:15.868 00:08:15.868 Error Log 00:08:15.868 ========= 00:08:15.868 00:08:15.868 Arbitration 00:08:15.868 ===========
00:08:15.868 Arbitration Burst: no limit 00:08:15.868 00:08:15.868 Power Management 00:08:15.868 ================ 00:08:15.868 Number of Power States: 1 00:08:15.868 Current Power State: Power State #0 00:08:15.868 Power State #0: 00:08:15.868 Max Power: 25.00 W 00:08:15.868 Non-Operational State: Operational 00:08:15.868 Entry Latency: 16 microseconds 00:08:15.868 Exit Latency: 4 microseconds 00:08:15.868 Relative Read Throughput: 0 00:08:15.868 Relative Read Latency: 0 00:08:15.868 Relative Write Throughput: 0 00:08:15.868 Relative Write Latency: 0 00:08:15.868 Idle Power: Not Reported 00:08:15.868 Active Power: Not Reported 00:08:15.868 Non-Operational Permissive Mode: Not Supported 00:08:15.868 00:08:15.868 Health Information 00:08:15.868 ================== 00:08:15.868 Critical Warnings: 00:08:15.868 Available Spare Space: OK 00:08:15.868 Temperature: OK 00:08:15.868 Device Reliability: OK 00:08:15.868 Read Only: No 00:08:15.868 Volatile Memory Backup: OK 00:08:15.868 Current Temperature: 323 Kelvin (50 Celsius) 00:08:15.868 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:15.868 Available Spare: 0% 00:08:15.868 Available Spare Threshold: 0% 00:08:15.868 Life Percentage Used: 0% 00:08:15.868 Data Units Read: 801 00:08:15.868 Data Units Written: 730 00:08:15.868 Host Read Commands: 37773 00:08:15.868 Host Write Commands: 37196 00:08:15.868 Controller Busy Time: 0 minutes 00:08:15.868 Power Cycles: 0 00:08:15.868 Power On Hours: 0 hours 00:08:15.868 Unsafe Shutdowns: 0 00:08:15.868 Unrecoverable Media Errors: 0 00:08:15.868 Lifetime Error Log Entries: 0 00:08:15.868 Warning Temperature Time: 0 minutes 00:08:15.868 Critical Temperature Time: 0 minutes 00:08:15.868 00:08:15.868 Number of Queues 00:08:15.868 ================ 00:08:15.868 Number of I/O Submission Queues: 64 00:08:15.868 Number of I/O Completion Queues: 64 00:08:15.868 00:08:15.868 ZNS Specific Controller Data 00:08:15.868 ============================ 00:08:15.868 Zone Append Size Limit: 0 00:08:15.868 00:08:15.868 00:08:15.868 Active Namespaces 00:08:15.868 ================= 00:08:15.868 Namespace ID:1 00:08:15.868 Error Recovery Timeout: Unlimited 00:08:15.868 Command Set Identifier: NVM (00h) 00:08:15.868 Deallocate: Supported 00:08:15.868 Deallocated/Unwritten Error: Supported 00:08:15.868 Deallocated Read Value: All 0x00 00:08:15.868 Deallocate in Write Zeroes: Not Supported 00:08:15.868 Deallocated Guard Field: 0xFFFF 00:08:15.868 Flush: Supported 00:08:15.868 Reservation: Not Supported 00:08:15.868 Namespace Sharing Capabilities: Multiple Controllers 00:08:15.868 Size (in LBAs): 262144 (1GiB) 00:08:15.868 Capacity (in LBAs): 262144 (1GiB) 00:08:15.868 Utilization (in LBAs): 262144 (1GiB) 00:08:15.868 Thin Provisioning: Not Supported 00:08:15.868 Per-NS Atomic Units: No 00:08:15.868 Maximum Single Source Range Length: 128 00:08:15.868 Maximum Copy Length: 128 00:08:15.868 Maximum Source Range Count: 128 00:08:15.868 NGUID/EUI64 Never Reused: No 00:08:15.868 Namespace Write Protected: No 00:08:15.868 Endurance group ID: 1 00:08:15.868 Number of LBA Formats: 8 00:08:15.868 Current LBA Format: LBA Format #04 00:08:15.868 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:15.868 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:15.868 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:15.868 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:15.868 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:15.868 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:15.868 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:15.868 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:15.868 00:08:15.868 Get Feature FDP: 00:08:15.868 ================ 00:08:15.868 Enabled: Yes 00:08:15.868 FDP configuration index: 0 00:08:15.868 00:08:15.868 FDP configurations log page 00:08:15.868 =========================== 00:08:15.868 Number of FDP configurations: 1 00:08:15.868 Version: 0 00:08:15.868 Size: 112 00:08:15.868 FDP Configuration Descriptor: 0 00:08:15.868 Descriptor Size: 96 00:08:15.868 Reclaim Group Identifier format: 2 00:08:15.868 FDP Volatile Write Cache: Not Present 00:08:15.868 FDP Configuration: Valid 00:08:15.868 Vendor Specific Size: 0 00:08:15.868 Number of Reclaim Groups: 2 00:08:15.869 Number of Reclaim Unit Handles: 8 00:08:15.869 Max Placement Identifiers: 128 00:08:15.869 Number of Namespaces Supported: 256 00:08:15.869 Reclaim Unit Nominal Size: 6000000 bytes 00:08:15.869 Estimated Reclaim Unit Time Limit: Not Reported 00:08:15.869 RUH Desc #000: RUH Type: Initially Isolated 00:08:15.869 RUH Desc #001: RUH Type: Initially Isolated 00:08:15.869 RUH Desc #002: RUH Type: Initially Isolated 00:08:15.869 RUH Desc #003: RUH Type: Initially Isolated 00:08:15.869 RUH Desc #004: RUH Type: Initially Isolated 00:08:15.869 RUH Desc #005: RUH Type: Initially Isolated 00:08:15.869 RUH Desc #006: RUH Type: Initially Isolated 00:08:15.869 RUH Desc #007: RUH Type: Initially Isolated 00:08:15.869 00:08:15.869 FDP reclaim unit handle usage log page 00:08:15.869 ====================================== 00:08:15.869 Number of Reclaim Unit Handles: 8 00:08:15.869 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:15.869 RUH Usage Desc #001: RUH Attributes: Unused 00:08:15.869 RUH Usage Desc #002: RUH Attributes: Unused 00:08:15.869 RUH Usage Desc #003: RUH Attributes: Unused 00:08:15.869 RUH Usage Desc #004: RUH Attributes: Unused 00:08:15.869 RUH Usage Desc #005: RUH Attributes: Unused 00:08:15.869 RUH Usage Desc #006: RUH Attributes: Unused 00:08:15.869 RUH Usage Desc #007: RUH Attributes: Unused 00:08:15.869 00:08:15.869 FDP statistics log page 00:08:15.869 ======================= 00:08:15.869 Host bytes with metadata written: 464887808 00:08:15.869 Media bytes with metadata written: 464941056 00:08:15.869 Media bytes erased: 0 00:08:15.869 00:08:15.869 FDP events log page 00:08:15.869 =================== 00:08:15.869 Number of FDP events: 0 00:08:15.869 00:08:15.869 NVM Specific Namespace Data 00:08:15.869 =========================== 00:08:15.869 Logical Block Storage Tag Mask: 0 00:08:15.869 Protection Information Capabilities: 00:08:15.869 16b Guard Protection Information Storage Tag Support: No 00:08:15.869 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:15.869 Storage Tag Check Read Support: No 00:08:15.869 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.869 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.869 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.869 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.869 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.869 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.869 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.869 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:15.869 00:08:15.869 real 0m0.925s 00:08:15.869 user 0m0.322s 00:08:15.869 sys 0m0.411s 00:08:15.869 11:39:29 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.869 11:39:29 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:15.869 ************************************ 00:08:15.869 END TEST nvme_identify 00:08:15.869 ************************************ 00:08:15.869 11:39:29 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:15.869 11:39:29 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:15.869 11:39:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.869 11:39:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:15.869 ************************************ 00:08:15.869 START TEST nvme_perf 00:08:15.869 ************************************ 00:08:15.869 11:39:29 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:15.869 11:39:29 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:17.255 Initializing NVMe Controllers 00:08:17.255 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:17.255 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:17.255 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:17.255 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:17.255 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:17.255 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:17.255 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:17.255 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:17.255 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:17.255 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:17.255 Initialization complete. Launching workers. 
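For reference, the nvme_perf step recorded above can be replayed by hand. The sketch below is illustrative rather than part of the harness: SPDK_DIR simply mirrors the repo path visible in this log, the flag annotations follow spdk_nvme_perf's usage text, and the readings of -LL and -N are assumptions inferred from the command line and the output that follows, not confirmed semantics.

#!/usr/bin/env bash
# Sketch: re-run the logged perf invocation outside the test harness.
SPDK_DIR=/home/vagrant/spdk_repo/spdk   # path as it appears in this log

#   -q 128    128 outstanding I/Os per namespace (queue depth)
#   -w read   sequential-read workload
#   -o 12288  12288-byte (12 KiB) I/Os, i.e. three 4096-byte blocks at the
#             current LBA format #04 reported by the identify pass above
#   -t 1      run for 1 second
#   -LL       latency tracking; doubled here, which evidently also produces
#             the summary and histogram sections printed below (assumption)
#   -i 0      shared-memory group ID matching the harness (assumption)
#   -N        carried over verbatim from the logged command line
"$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

# Cross-check: the MiB/s column in the table below is just IOPS * I/O size,
# e.g. 9300.33 IOPS * 12288 B / 2^20 ~= 108.99 MiB/s for 0000:00:10.0.
awk 'BEGIN { printf "%.2f MiB/s\n", 9300.33 * 12288 / 1048576 }'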
00:08:17.255 ======================================================== 00:08:17.255 Latency(us) 00:08:17.255 Device Information : IOPS MiB/s Average min max 00:08:17.255 PCIE (0000:00:10.0) NSID 1 from core 0: 9300.33 108.99 13772.33 9166.29 29641.08 00:08:17.255 PCIE (0000:00:11.0) NSID 1 from core 0: 9300.33 108.99 13768.36 8354.91 29177.53 00:08:17.255 PCIE (0000:00:13.0) NSID 1 from core 0: 9300.33 108.99 13756.90 6776.93 29815.28 00:08:17.255 PCIE (0000:00:12.0) NSID 1 from core 0: 9300.33 108.99 13744.92 6046.79 29577.00 00:08:17.255 PCIE (0000:00:12.0) NSID 2 from core 0: 9300.33 108.99 13733.06 5252.39 29869.75 00:08:17.255 PCIE (0000:00:12.0) NSID 3 from core 0: 9364.03 109.73 13627.78 4456.16 23181.40 00:08:17.255 ======================================================== 00:08:17.255 Total : 55865.65 654.68 13733.77 4456.16 29869.75 00:08:17.255 00:08:17.255 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:17.255 ================================================================================= 00:08:17.255 1.00000% : 10838.646us 00:08:17.255 10.00000% : 11695.655us 00:08:17.255 25.00000% : 12351.015us 00:08:17.255 50.00000% : 13510.498us 00:08:17.255 75.00000% : 14720.394us 00:08:17.255 90.00000% : 16131.938us 00:08:17.255 95.00000% : 17039.360us 00:08:17.255 98.00000% : 18350.080us 00:08:17.255 99.00000% : 22080.591us 00:08:17.255 99.50000% : 28835.840us 00:08:17.255 99.90000% : 29642.437us 00:08:17.255 99.99000% : 29642.437us 00:08:17.255 99.99900% : 29642.437us 00:08:17.255 99.99990% : 29642.437us 00:08:17.255 99.99999% : 29642.437us 00:08:17.255 00:08:17.255 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:17.255 ================================================================================= 00:08:17.255 1.00000% : 10737.822us 00:08:17.255 10.00000% : 11695.655us 00:08:17.255 25.00000% : 12401.428us 00:08:17.255 50.00000% : 13510.498us 00:08:17.255 75.00000% : 14619.569us 00:08:17.255 90.00000% : 16232.763us 00:08:17.255 95.00000% : 17241.009us 00:08:17.255 98.00000% : 18249.255us 00:08:17.255 99.00000% : 22282.240us 00:08:17.255 99.50000% : 28634.191us 00:08:17.255 99.90000% : 29037.489us 00:08:17.255 99.99000% : 29239.138us 00:08:17.255 99.99900% : 29239.138us 00:08:17.255 99.99990% : 29239.138us 00:08:17.255 99.99999% : 29239.138us 00:08:17.255 00:08:17.255 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:17.255 ================================================================================= 00:08:17.255 1.00000% : 10737.822us 00:08:17.255 10.00000% : 11695.655us 00:08:17.255 25.00000% : 12351.015us 00:08:17.255 50.00000% : 13510.498us 00:08:17.255 75.00000% : 14720.394us 00:08:17.255 90.00000% : 15930.289us 00:08:17.255 95.00000% : 17442.658us 00:08:17.255 98.00000% : 18753.378us 00:08:17.255 99.00000% : 23088.837us 00:08:17.255 99.50000% : 29239.138us 00:08:17.255 99.90000% : 29844.086us 00:08:17.255 99.99000% : 29844.086us 00:08:17.255 99.99900% : 29844.086us 00:08:17.255 99.99990% : 29844.086us 00:08:17.255 99.99999% : 29844.086us 00:08:17.255 00:08:17.255 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:17.255 ================================================================================= 00:08:17.255 1.00000% : 10737.822us 00:08:17.255 10.00000% : 11746.068us 00:08:17.255 25.00000% : 12351.015us 00:08:17.255 50.00000% : 13510.498us 00:08:17.255 75.00000% : 14720.394us 00:08:17.255 90.00000% : 15930.289us 00:08:17.255 95.00000% : 17241.009us 00:08:17.255 98.00000% : 18450.905us 
00:08:17.255 99.00000% : 22685.538us 00:08:17.255 99.50000% : 29037.489us 00:08:17.255 99.90000% : 29440.788us 00:08:17.255 99.99000% : 29642.437us 00:08:17.255 99.99900% : 29642.437us 00:08:17.255 99.99990% : 29642.437us 00:08:17.255 99.99999% : 29642.437us 00:08:17.255 00:08:17.255 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:17.255 ================================================================================= 00:08:17.255 1.00000% : 10586.585us 00:08:17.255 10.00000% : 11695.655us 00:08:17.255 25.00000% : 12401.428us 00:08:17.256 50.00000% : 13510.498us 00:08:17.256 75.00000% : 14720.394us 00:08:17.256 90.00000% : 15930.289us 00:08:17.256 95.00000% : 16938.535us 00:08:17.256 98.00000% : 18652.554us 00:08:17.256 99.00000% : 22483.889us 00:08:17.256 99.50000% : 29239.138us 00:08:17.256 99.90000% : 29844.086us 00:08:17.256 99.99000% : 30045.735us 00:08:17.256 99.99900% : 30045.735us 00:08:17.256 99.99990% : 30045.735us 00:08:17.256 99.99999% : 30045.735us 00:08:17.256 00:08:17.256 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:17.256 ================================================================================= 00:08:17.256 1.00000% : 10132.874us 00:08:17.256 10.00000% : 11695.655us 00:08:17.256 25.00000% : 12351.015us 00:08:17.256 50.00000% : 13510.498us 00:08:17.256 75.00000% : 14720.394us 00:08:17.256 90.00000% : 16031.114us 00:08:17.256 95.00000% : 16837.711us 00:08:17.256 98.00000% : 17644.308us 00:08:17.256 99.00000% : 18854.203us 00:08:17.256 99.50000% : 22483.889us 00:08:17.256 99.90000% : 23088.837us 00:08:17.256 99.99000% : 23189.662us 00:08:17.256 99.99900% : 23189.662us 00:08:17.256 99.99990% : 23189.662us 00:08:17.256 99.99999% : 23189.662us 00:08:17.256 00:08:17.256 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:17.256 ============================================================================== 00:08:17.256 Range in us Cumulative IO count 00:08:17.256 9124.628 - 9175.040: 0.0107% ( 1) 00:08:17.256 9175.040 - 9225.452: 0.0642% ( 5) 00:08:17.256 9225.452 - 9275.865: 0.1070% ( 4) 00:08:17.256 9275.865 - 9326.277: 0.1391% ( 3) 00:08:17.256 9326.277 - 9376.689: 0.1712% ( 3) 00:08:17.256 9376.689 - 9427.102: 0.2247% ( 5) 00:08:17.256 9427.102 - 9477.514: 0.2783% ( 5) 00:08:17.256 9477.514 - 9527.926: 0.2997% ( 2) 00:08:17.256 9527.926 - 9578.338: 0.3532% ( 5) 00:08:17.256 9578.338 - 9628.751: 0.3853% ( 3) 00:08:17.256 9628.751 - 9679.163: 0.4281% ( 4) 00:08:17.256 9679.163 - 9729.575: 0.4816% ( 5) 00:08:17.256 9729.575 - 9779.988: 0.5137% ( 3) 00:08:17.256 9779.988 - 9830.400: 0.5565% ( 4) 00:08:17.256 9830.400 - 9880.812: 0.5886% ( 3) 00:08:17.256 9880.812 - 9931.225: 0.6314% ( 4) 00:08:17.256 9931.225 - 9981.637: 0.6742% ( 4) 00:08:17.256 9981.637 - 10032.049: 0.6849% ( 1) 00:08:17.256 10536.172 - 10586.585: 0.7063% ( 2) 00:08:17.256 10586.585 - 10636.997: 0.7491% ( 4) 00:08:17.256 10636.997 - 10687.409: 0.8241% ( 7) 00:08:17.256 10687.409 - 10737.822: 0.8883% ( 6) 00:08:17.256 10737.822 - 10788.234: 0.9739% ( 8) 00:08:17.256 10788.234 - 10838.646: 1.1237% ( 14) 00:08:17.256 10838.646 - 10889.058: 1.3057% ( 17) 00:08:17.256 10889.058 - 10939.471: 1.6374% ( 31) 00:08:17.256 10939.471 - 10989.883: 1.8729% ( 22) 00:08:17.256 10989.883 - 11040.295: 2.1832% ( 29) 00:08:17.256 11040.295 - 11090.708: 2.4829% ( 28) 00:08:17.256 11090.708 - 11141.120: 2.7825% ( 28) 00:08:17.256 11141.120 - 11191.532: 3.1892% ( 38) 00:08:17.256 11191.532 - 11241.945: 3.7671% ( 54) 00:08:17.256 11241.945 - 11292.357: 4.1738% ( 
38) 00:08:17.256 11292.357 - 11342.769: 4.7624% ( 55) 00:08:17.256 11342.769 - 11393.182: 5.3831% ( 58) 00:08:17.256 11393.182 - 11443.594: 6.1644% ( 73) 00:08:17.256 11443.594 - 11494.006: 6.7530% ( 55) 00:08:17.256 11494.006 - 11544.418: 7.8339% ( 101) 00:08:17.256 11544.418 - 11594.831: 8.5830% ( 70) 00:08:17.256 11594.831 - 11645.243: 9.6211% ( 97) 00:08:17.256 11645.243 - 11695.655: 10.3275% ( 66) 00:08:17.256 11695.655 - 11746.068: 11.3014% ( 91) 00:08:17.256 11746.068 - 11796.480: 12.1682% ( 81) 00:08:17.256 11796.480 - 11846.892: 13.2170% ( 98) 00:08:17.256 11846.892 - 11897.305: 14.2658% ( 98) 00:08:17.256 11897.305 - 11947.717: 15.4431% ( 110) 00:08:17.256 11947.717 - 11998.129: 16.5133% ( 100) 00:08:17.256 11998.129 - 12048.542: 17.5407% ( 96) 00:08:17.256 12048.542 - 12098.954: 18.8356% ( 121) 00:08:17.256 12098.954 - 12149.366: 20.0021% ( 109) 00:08:17.256 12149.366 - 12199.778: 21.3506% ( 126) 00:08:17.256 12199.778 - 12250.191: 22.6455% ( 121) 00:08:17.256 12250.191 - 12300.603: 23.7372% ( 102) 00:08:17.256 12300.603 - 12351.015: 25.2676% ( 143) 00:08:17.256 12351.015 - 12401.428: 26.5946% ( 124) 00:08:17.256 12401.428 - 12451.840: 27.7290% ( 106) 00:08:17.256 12451.840 - 12502.252: 29.0775% ( 126) 00:08:17.256 12502.252 - 12552.665: 30.2547% ( 110) 00:08:17.256 12552.665 - 12603.077: 31.3891% ( 106) 00:08:17.256 12603.077 - 12653.489: 32.6520% ( 118) 00:08:17.256 12653.489 - 12703.902: 33.7115% ( 99) 00:08:17.256 12703.902 - 12754.314: 35.0385% ( 124) 00:08:17.256 12754.314 - 12804.726: 36.1836% ( 107) 00:08:17.256 12804.726 - 12855.138: 37.4786% ( 121) 00:08:17.256 12855.138 - 12905.551: 38.6237% ( 107) 00:08:17.256 12905.551 - 13006.375: 40.7106% ( 195) 00:08:17.256 13006.375 - 13107.200: 42.8938% ( 204) 00:08:17.256 13107.200 - 13208.025: 44.7560% ( 174) 00:08:17.256 13208.025 - 13308.849: 46.7038% ( 182) 00:08:17.256 13308.849 - 13409.674: 48.7693% ( 193) 00:08:17.256 13409.674 - 13510.498: 50.8241% ( 192) 00:08:17.256 13510.498 - 13611.323: 53.1250% ( 215) 00:08:17.256 13611.323 - 13712.148: 55.2333% ( 197) 00:08:17.256 13712.148 - 13812.972: 57.1383% ( 178) 00:08:17.256 13812.972 - 13913.797: 59.2680% ( 199) 00:08:17.256 13913.797 - 14014.622: 61.6759% ( 225) 00:08:17.256 14014.622 - 14115.446: 63.9127% ( 209) 00:08:17.256 14115.446 - 14216.271: 66.1601% ( 210) 00:08:17.256 14216.271 - 14317.095: 68.1614% ( 187) 00:08:17.256 14317.095 - 14417.920: 70.1948% ( 190) 00:08:17.256 14417.920 - 14518.745: 72.1426% ( 182) 00:08:17.256 14518.745 - 14619.569: 73.8977% ( 164) 00:08:17.256 14619.569 - 14720.394: 75.6314% ( 162) 00:08:17.256 14720.394 - 14821.218: 77.0013% ( 128) 00:08:17.256 14821.218 - 14922.043: 78.6708% ( 156) 00:08:17.256 14922.043 - 15022.868: 79.9122% ( 116) 00:08:17.256 15022.868 - 15123.692: 80.9824% ( 100) 00:08:17.256 15123.692 - 15224.517: 81.9456% ( 90) 00:08:17.256 15224.517 - 15325.342: 83.1229% ( 110) 00:08:17.256 15325.342 - 15426.166: 84.2787% ( 108) 00:08:17.256 15426.166 - 15526.991: 85.1562% ( 82) 00:08:17.256 15526.991 - 15627.815: 86.1729% ( 95) 00:08:17.256 15627.815 - 15728.640: 87.1468% ( 91) 00:08:17.256 15728.640 - 15829.465: 88.0030% ( 80) 00:08:17.256 15829.465 - 15930.289: 88.8485% ( 79) 00:08:17.256 15930.289 - 16031.114: 89.7581% ( 85) 00:08:17.256 16031.114 - 16131.938: 90.3789% ( 58) 00:08:17.256 16131.938 - 16232.763: 91.0638% ( 64) 00:08:17.256 16232.763 - 16333.588: 91.7273% ( 62) 00:08:17.256 16333.588 - 16434.412: 92.3801% ( 61) 00:08:17.256 16434.412 - 16535.237: 92.9473% ( 53) 00:08:17.256 16535.237 - 16636.062: 93.3326% ( 
36) 00:08:17.256 16636.062 - 16736.886: 93.7607% ( 40) 00:08:17.256 16736.886 - 16837.711: 94.2958% ( 50) 00:08:17.256 16837.711 - 16938.535: 94.7025% ( 38) 00:08:17.256 16938.535 - 17039.360: 95.0985% ( 37) 00:08:17.256 17039.360 - 17140.185: 95.4302% ( 31) 00:08:17.256 17140.185 - 17241.009: 95.8797% ( 42) 00:08:17.256 17241.009 - 17341.834: 96.1901% ( 29) 00:08:17.256 17341.834 - 17442.658: 96.4576% ( 25) 00:08:17.256 17442.658 - 17543.483: 96.6824% ( 21) 00:08:17.256 17543.483 - 17644.308: 96.9606% ( 26) 00:08:17.256 17644.308 - 17745.132: 97.1747% ( 20) 00:08:17.256 17745.132 - 17845.957: 97.3031% ( 12) 00:08:17.256 17845.957 - 17946.782: 97.4315% ( 12) 00:08:17.256 17946.782 - 18047.606: 97.6134% ( 17) 00:08:17.256 18047.606 - 18148.431: 97.7740% ( 15) 00:08:17.256 18148.431 - 18249.255: 97.8810% ( 10) 00:08:17.256 18249.255 - 18350.080: 98.0522% ( 16) 00:08:17.256 18350.080 - 18450.905: 98.1699% ( 11) 00:08:17.256 18450.905 - 18551.729: 98.2984% ( 12) 00:08:17.256 18551.729 - 18652.554: 98.3840% ( 8) 00:08:17.256 18652.554 - 18753.378: 98.4803% ( 9) 00:08:17.256 18753.378 - 18854.203: 98.5445% ( 6) 00:08:17.256 18854.203 - 18955.028: 98.6301% ( 8) 00:08:17.256 21475.643 - 21576.468: 98.6836% ( 5) 00:08:17.256 21576.468 - 21677.292: 98.7586% ( 7) 00:08:17.256 21677.292 - 21778.117: 98.8121% ( 5) 00:08:17.256 21778.117 - 21878.942: 98.8656% ( 5) 00:08:17.256 21878.942 - 21979.766: 98.9298% ( 6) 00:08:17.256 21979.766 - 22080.591: 99.0261% ( 9) 00:08:17.256 22080.591 - 22181.415: 99.0582% ( 3) 00:08:17.256 22181.415 - 22282.240: 99.1331% ( 7) 00:08:17.256 22282.240 - 22383.065: 99.1866% ( 5) 00:08:17.256 22383.065 - 22483.889: 99.2723% ( 8) 00:08:17.256 22483.889 - 22584.714: 99.2937% ( 2) 00:08:17.256 22584.714 - 22685.538: 99.3151% ( 2) 00:08:17.256 28432.542 - 28634.191: 99.3900% ( 7) 00:08:17.256 28634.191 - 28835.840: 99.5291% ( 13) 00:08:17.256 28835.840 - 29037.489: 99.6575% ( 12) 00:08:17.256 29037.489 - 29239.138: 99.7753% ( 11) 00:08:17.256 29239.138 - 29440.788: 99.8930% ( 11) 00:08:17.256 29440.788 - 29642.437: 100.0000% ( 10) 00:08:17.256 00:08:17.256 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:17.256 ============================================================================== 00:08:17.256 Range in us Cumulative IO count 00:08:17.256 8318.031 - 8368.443: 0.0214% ( 2) 00:08:17.256 8368.443 - 8418.855: 0.0642% ( 4) 00:08:17.256 8418.855 - 8469.268: 0.1177% ( 5) 00:08:17.256 8469.268 - 8519.680: 0.1712% ( 5) 00:08:17.256 8519.680 - 8570.092: 0.2247% ( 5) 00:08:17.256 8570.092 - 8620.505: 0.2676% ( 4) 00:08:17.256 8620.505 - 8670.917: 0.3211% ( 5) 00:08:17.256 8670.917 - 8721.329: 0.3639% ( 4) 00:08:17.256 8721.329 - 8771.742: 0.4174% ( 5) 00:08:17.256 8771.742 - 8822.154: 0.4602% ( 4) 00:08:17.256 8822.154 - 8872.566: 0.5137% ( 5) 00:08:17.257 8872.566 - 8922.978: 0.5565% ( 4) 00:08:17.257 8922.978 - 8973.391: 0.5993% ( 4) 00:08:17.257 8973.391 - 9023.803: 0.6421% ( 4) 00:08:17.257 9023.803 - 9074.215: 0.6849% ( 4) 00:08:17.257 10536.172 - 10586.585: 0.7491% ( 6) 00:08:17.257 10586.585 - 10636.997: 0.8241% ( 7) 00:08:17.257 10636.997 - 10687.409: 0.9525% ( 12) 00:08:17.257 10687.409 - 10737.822: 1.0809% ( 12) 00:08:17.257 10737.822 - 10788.234: 1.1986% ( 11) 00:08:17.257 10788.234 - 10838.646: 1.3378% ( 13) 00:08:17.257 10838.646 - 10889.058: 1.4127% ( 7) 00:08:17.257 10889.058 - 10939.471: 1.5197% ( 10) 00:08:17.257 10939.471 - 10989.883: 1.6588% ( 13) 00:08:17.257 10989.883 - 11040.295: 1.9050% ( 23) 00:08:17.257 11040.295 - 11090.708: 2.1618% ( 
24) 00:08:17.257 11090.708 - 11141.120: 2.4936% ( 31) 00:08:17.257 11141.120 - 11191.532: 2.8467% ( 33) 00:08:17.257 11191.532 - 11241.945: 3.2962% ( 42) 00:08:17.257 11241.945 - 11292.357: 3.8420% ( 51) 00:08:17.257 11292.357 - 11342.769: 4.3664% ( 49) 00:08:17.257 11342.769 - 11393.182: 5.0300% ( 62) 00:08:17.257 11393.182 - 11443.594: 5.7577% ( 68) 00:08:17.257 11443.594 - 11494.006: 6.5068% ( 70) 00:08:17.257 11494.006 - 11544.418: 7.3202% ( 76) 00:08:17.257 11544.418 - 11594.831: 8.1871% ( 81) 00:08:17.257 11594.831 - 11645.243: 9.2252% ( 97) 00:08:17.257 11645.243 - 11695.655: 10.2312% ( 94) 00:08:17.257 11695.655 - 11746.068: 11.1729% ( 88) 00:08:17.257 11746.068 - 11796.480: 12.1789% ( 94) 00:08:17.257 11796.480 - 11846.892: 13.3134% ( 106) 00:08:17.257 11846.892 - 11897.305: 14.4692% ( 108) 00:08:17.257 11897.305 - 11947.717: 15.5929% ( 105) 00:08:17.257 11947.717 - 11998.129: 16.7487% ( 108) 00:08:17.257 11998.129 - 12048.542: 18.0330% ( 120) 00:08:17.257 12048.542 - 12098.954: 19.2423% ( 113) 00:08:17.257 12098.954 - 12149.366: 20.4409% ( 112) 00:08:17.257 12149.366 - 12199.778: 21.6074% ( 109) 00:08:17.257 12199.778 - 12250.191: 22.7312% ( 105) 00:08:17.257 12250.191 - 12300.603: 23.8335% ( 103) 00:08:17.257 12300.603 - 12351.015: 24.7646% ( 87) 00:08:17.257 12351.015 - 12401.428: 25.7598% ( 93) 00:08:17.257 12401.428 - 12451.840: 26.8408% ( 101) 00:08:17.257 12451.840 - 12502.252: 27.9966% ( 108) 00:08:17.257 12502.252 - 12552.665: 29.0347% ( 97) 00:08:17.257 12552.665 - 12603.077: 29.9872% ( 89) 00:08:17.257 12603.077 - 12653.489: 30.9932% ( 94) 00:08:17.257 12653.489 - 12703.902: 32.1169% ( 105) 00:08:17.257 12703.902 - 12754.314: 33.3048% ( 111) 00:08:17.257 12754.314 - 12804.726: 34.5355% ( 115) 00:08:17.257 12804.726 - 12855.138: 35.6271% ( 102) 00:08:17.257 12855.138 - 12905.551: 36.8258% ( 112) 00:08:17.257 12905.551 - 13006.375: 39.0625% ( 209) 00:08:17.257 13006.375 - 13107.200: 41.4277% ( 221) 00:08:17.257 13107.200 - 13208.025: 43.9533% ( 236) 00:08:17.257 13208.025 - 13308.849: 46.3078% ( 220) 00:08:17.257 13308.849 - 13409.674: 48.3840% ( 194) 00:08:17.257 13409.674 - 13510.498: 50.7384% ( 220) 00:08:17.257 13510.498 - 13611.323: 53.1143% ( 222) 00:08:17.257 13611.323 - 13712.148: 55.4688% ( 220) 00:08:17.257 13712.148 - 13812.972: 57.8446% ( 222) 00:08:17.257 13812.972 - 13913.797: 60.0813% ( 209) 00:08:17.257 13913.797 - 14014.622: 62.6177% ( 237) 00:08:17.257 14014.622 - 14115.446: 65.1862% ( 240) 00:08:17.257 14115.446 - 14216.271: 67.4229% ( 209) 00:08:17.257 14216.271 - 14317.095: 69.8202% ( 224) 00:08:17.257 14317.095 - 14417.920: 72.0034% ( 204) 00:08:17.257 14417.920 - 14518.745: 73.8228% ( 170) 00:08:17.257 14518.745 - 14619.569: 75.4067% ( 148) 00:08:17.257 14619.569 - 14720.394: 76.6588% ( 117) 00:08:17.257 14720.394 - 14821.218: 78.0180% ( 127) 00:08:17.257 14821.218 - 14922.043: 79.5805% ( 146) 00:08:17.257 14922.043 - 15022.868: 80.7470% ( 109) 00:08:17.257 15022.868 - 15123.692: 81.7637% ( 95) 00:08:17.257 15123.692 - 15224.517: 82.6627% ( 84) 00:08:17.257 15224.517 - 15325.342: 83.6045% ( 88) 00:08:17.257 15325.342 - 15426.166: 84.5034% ( 84) 00:08:17.257 15426.166 - 15526.991: 85.4559% ( 89) 00:08:17.257 15526.991 - 15627.815: 86.5047% ( 98) 00:08:17.257 15627.815 - 15728.640: 87.3288% ( 77) 00:08:17.257 15728.640 - 15829.465: 88.0565% ( 68) 00:08:17.257 15829.465 - 15930.289: 88.7842% ( 68) 00:08:17.257 15930.289 - 16031.114: 89.4157% ( 59) 00:08:17.257 16031.114 - 16131.938: 89.9401% ( 49) 00:08:17.257 16131.938 - 16232.763: 90.5929% ( 61) 
00:08:17.257 16232.763 - 16333.588: 91.2136% ( 58) 00:08:17.257 16333.588 - 16434.412: 91.9199% ( 66) 00:08:17.257 16434.412 - 16535.237: 92.4015% ( 45) 00:08:17.257 16535.237 - 16636.062: 92.7975% ( 37) 00:08:17.257 16636.062 - 16736.886: 93.1614% ( 34) 00:08:17.257 16736.886 - 16837.711: 93.5574% ( 37) 00:08:17.257 16837.711 - 16938.535: 94.0283% ( 44) 00:08:17.257 16938.535 - 17039.360: 94.5312% ( 47) 00:08:17.257 17039.360 - 17140.185: 94.9486% ( 39) 00:08:17.257 17140.185 - 17241.009: 95.2269% ( 26) 00:08:17.257 17241.009 - 17341.834: 95.5158% ( 27) 00:08:17.257 17341.834 - 17442.658: 95.8583% ( 32) 00:08:17.257 17442.658 - 17543.483: 96.2329% ( 35) 00:08:17.257 17543.483 - 17644.308: 96.5967% ( 34) 00:08:17.257 17644.308 - 17745.132: 96.9178% ( 30) 00:08:17.257 17745.132 - 17845.957: 97.2068% ( 27) 00:08:17.257 17845.957 - 17946.782: 97.4850% ( 26) 00:08:17.257 17946.782 - 18047.606: 97.7419% ( 24) 00:08:17.257 18047.606 - 18148.431: 97.9666% ( 21) 00:08:17.257 18148.431 - 18249.255: 98.1164% ( 14) 00:08:17.257 18249.255 - 18350.080: 98.2877% ( 16) 00:08:17.257 18350.080 - 18450.905: 98.4375% ( 14) 00:08:17.257 18450.905 - 18551.729: 98.5338% ( 9) 00:08:17.257 18551.729 - 18652.554: 98.5766% ( 4) 00:08:17.257 18652.554 - 18753.378: 98.6194% ( 4) 00:08:17.257 18753.378 - 18854.203: 98.6301% ( 1) 00:08:17.257 21677.292 - 21778.117: 98.6729% ( 4) 00:08:17.257 21778.117 - 21878.942: 98.7479% ( 7) 00:08:17.257 21878.942 - 21979.766: 98.8121% ( 6) 00:08:17.257 21979.766 - 22080.591: 98.8870% ( 7) 00:08:17.257 22080.591 - 22181.415: 98.9619% ( 7) 00:08:17.257 22181.415 - 22282.240: 99.0261% ( 6) 00:08:17.257 22282.240 - 22383.065: 99.0903% ( 6) 00:08:17.257 22383.065 - 22483.889: 99.1652% ( 7) 00:08:17.257 22483.889 - 22584.714: 99.2402% ( 7) 00:08:17.257 22584.714 - 22685.538: 99.3151% ( 7) 00:08:17.257 28029.243 - 28230.892: 99.3365% ( 2) 00:08:17.257 28230.892 - 28432.542: 99.4863% ( 14) 00:08:17.257 28432.542 - 28634.191: 99.6254% ( 13) 00:08:17.257 28634.191 - 28835.840: 99.7539% ( 12) 00:08:17.257 28835.840 - 29037.489: 99.9037% ( 14) 00:08:17.257 29037.489 - 29239.138: 100.0000% ( 9) 00:08:17.257 00:08:17.257 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:17.257 ============================================================================== 00:08:17.257 Range in us Cumulative IO count 00:08:17.257 6755.249 - 6805.662: 0.0428% ( 4) 00:08:17.257 6805.662 - 6856.074: 0.0856% ( 4) 00:08:17.257 6856.074 - 6906.486: 0.1498% ( 6) 00:08:17.257 6906.486 - 6956.898: 0.2033% ( 5) 00:08:17.257 6956.898 - 7007.311: 0.2568% ( 5) 00:08:17.257 7007.311 - 7057.723: 0.3104% ( 5) 00:08:17.257 7057.723 - 7108.135: 0.3532% ( 4) 00:08:17.257 7108.135 - 7158.548: 0.3960% ( 4) 00:08:17.257 7158.548 - 7208.960: 0.4495% ( 5) 00:08:17.257 7208.960 - 7259.372: 0.4923% ( 4) 00:08:17.257 7259.372 - 7309.785: 0.5351% ( 4) 00:08:17.257 7309.785 - 7360.197: 0.5886% ( 5) 00:08:17.257 7360.197 - 7410.609: 0.6421% ( 5) 00:08:17.257 7410.609 - 7461.022: 0.6849% ( 4) 00:08:17.257 10384.935 - 10435.348: 0.7063% ( 2) 00:08:17.257 10435.348 - 10485.760: 0.7384% ( 3) 00:08:17.257 10485.760 - 10536.172: 0.7812% ( 4) 00:08:17.257 10536.172 - 10586.585: 0.8134% ( 3) 00:08:17.257 10586.585 - 10636.997: 0.8562% ( 4) 00:08:17.257 10636.997 - 10687.409: 0.8990% ( 4) 00:08:17.257 10687.409 - 10737.822: 1.0060% ( 10) 00:08:17.257 10737.822 - 10788.234: 1.0809% ( 7) 00:08:17.257 10788.234 - 10838.646: 1.2093% ( 12) 00:08:17.257 10838.646 - 10889.058: 1.3485% ( 13) 00:08:17.257 10889.058 - 10939.471: 1.5090% ( 15) 
00:08:17.257 [continuation of the preceding controller's latency histogram: per-bucket "Range in us: Cumulative IO count" rows omitted; the cumulative count climbs from 1.7337% in the 10939.471 - 10989.883us bucket to 100.0000% by 29844.086us]
00:08:17.258 
00:08:17.258 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:17.258 ==============================================================================
00:08:17.258        Range in us     Cumulative    IO count
00:08:17.258 [per-bucket rows omitted: first completions in the 6024.271 - 6049.477us bucket (0.0107%); cumulative count reaches 100.0000% by 29642.437us]
00:08:17.258 
00:08:17.258 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:17.259 ==============================================================================
00:08:17.259        Range in us     Cumulative    IO count
00:08:17.259 [per-bucket rows omitted: first completions in the 5242.880 - 5268.086us bucket (0.0321%); cumulative count reaches 100.0000% by 30045.735us]
00:08:17.260 
00:08:17.260 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:17.260 ==============================================================================
00:08:17.260        Range in us     Cumulative    IO count
00:08:17.261 [per-bucket rows omitted: first completions in the 4436.283 - 4461.489us bucket (0.0106%); cumulative count reaches 100.0000% by 23189.662us]
00:08:17.261 
00:08:17.261 11:39:30 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:18.206 Initializing NVMe Controllers
00:08:18.207 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:18.207 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:18.207 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:18.207 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:18.207 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:18.207 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:18.207 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:18.207 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:18.207 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:18.207 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:18.207 Initialization complete. Launching workers.
00:08:18.207 ========================================================
00:08:18.207                                                                                             Latency(us)
00:08:18.207 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:18.207 PCIE (0000:00:10.0) NSID 1 from core 0 :   14247.91     166.97    8986.27    5725.71   28227.39
00:08:18.207 PCIE (0000:00:11.0) NSID 1 from core 0 :   14247.91     166.97    8979.53    5428.07   28124.47
00:08:18.207 PCIE (0000:00:13.0) NSID 1 from core 0 :   14247.91     166.97    8972.45    4427.44   27894.46
00:08:18.207 PCIE (0000:00:12.0) NSID 1 from core 0 :   14247.91     166.97    8965.15    4163.33   27432.91
00:08:18.207 PCIE (0000:00:12.0) NSID 2 from core 0 :   14247.91     166.97    8957.84    3781.79   27044.97
00:08:18.207 PCIE (0000:00:12.0) NSID 3 from core 0 :   14247.91     166.97    8950.52    3542.57   26559.50
00:08:18.207 ========================================================
00:08:18.207 Total                                  :   85487.44    1001.81    8968.63    3542.57   28227.39
00:08:18.207 
00:08:18.207 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:18.207 =================================================================================
00:08:18.207  1.00000% :  7007.311us
00:08:18.207 10.00000% :  8015.557us
00:08:18.207 25.00000% :  8318.031us
00:08:18.207 50.00000% :  8670.917us
00:08:18.207 75.00000% :  9124.628us
00:08:18.207 90.00000% :  9981.637us
00:08:18.207 95.00000% : 11393.182us
00:08:18.207 98.00000% : 13712.148us
00:08:18.207 99.00000% : 16333.588us
00:08:18.207 99.50000% : 19358.326us
00:08:18.207 99.90000% : 27827.594us
00:08:18.207 99.99000% : 28230.892us
00:08:18.207 99.99900% : 28230.892us
00:08:18.207 99.99990% : 28230.892us
00:08:18.207 99.99999% : 28230.892us
00:08:18.207 
00:08:18.207 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:18.207 =================================================================================
00:08:18.207  1.00000% :  7057.723us
00:08:18.207 10.00000% :  8166.794us
00:08:18.207 25.00000% :  8368.443us
00:08:18.207 50.00000% :  8620.505us
00:08:18.207 75.00000% :  9023.803us
00:08:18.207 90.00000% :  9931.225us
00:08:18.207 95.00000% : 11342.769us
00:08:18.207 98.00000% : 13510.498us
00:08:18.207 99.00000% : 16031.114us
00:08:18.207 99.50000% : 19660.800us
00:08:18.207 99.90000% : 27827.594us
00:08:18.207 99.99000% : 28230.892us
00:08:18.207 99.99900% : 28230.892us
00:08:18.207 99.99990% : 28230.892us
00:08:18.207 99.99999% : 28230.892us
00:08:18.207 
00:08:18.207 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:18.207 =================================================================================
00:08:18.207  1.00000% :  7259.372us
00:08:18.207 10.00000% :  8116.382us
00:08:18.207 25.00000% :  8368.443us
00:08:18.207 50.00000% :  8670.917us
00:08:18.207 75.00000% :  9023.803us
00:08:18.207 90.00000% :  9931.225us
00:08:18.207 95.00000% : 11342.769us
00:08:18.207 98.00000% : 13611.323us
00:08:18.207 99.00000% : 15728.640us
00:08:18.207 99.50000% : 20064.098us
00:08:18.207 99.90000% : 27625.945us
00:08:18.207 99.99000% : 28029.243us
00:08:18.207 99.99900% : 28029.243us
00:08:18.207 99.99990% : 28029.243us
00:08:18.207 99.99999% : 28029.243us
00:08:18.207 
00:08:18.207 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:18.207 =================================================================================
00:08:18.207  1.00000% :  6906.486us
00:08:18.207 10.00000% :  8116.382us
00:08:18.207 25.00000% :  8368.443us
00:08:18.207 50.00000% :  8620.505us
00:08:18.207 75.00000% :  9023.803us
00:08:18.207 90.00000% :  9880.812us
00:08:18.207 95.00000% : 11191.532us
00:08:18.207 98.00000% : 14014.622us
00:08:18.207 99.00000% : 15325.342us
00:08:18.207 99.50000% : 20064.098us
00:08:18.207 99.90000% : 27222.646us
00:08:18.207 99.99000% : 27424.295us
00:08:18.207 99.99900% : 27625.945us
00:08:18.207 99.99990% : 27625.945us
00:08:18.207 99.99999% : 27625.945us
00:08:18.207 
00:08:18.207 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:18.207 =================================================================================
00:08:18.207  1.00000% :  6906.486us
00:08:18.207 10.00000% :  8116.382us
00:08:18.207 25.00000% :  8368.443us
00:08:18.207 50.00000% :  8670.917us
00:08:18.207 75.00000% :  9023.803us
00:08:18.207 90.00000% :  9880.812us
00:08:18.207 95.00000% : 11090.708us
00:08:18.207 98.00000% : 14014.622us
00:08:18.207 99.00000% : 15728.640us
00:08:18.207 99.50000% : 20265.748us
00:08:18.207 99.90000% : 26819.348us
00:08:18.207 99.99000% : 27222.646us
00:08:18.207 99.99900% : 27222.646us
00:08:18.207 99.99990% : 27222.646us
00:08:18.207 99.99999% : 27222.646us
00:08:18.207 
00:08:18.207 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:18.207 =================================================================================
00:08:18.207  1.00000% :  6906.486us
00:08:18.207 10.00000% :  8116.382us
00:08:18.207 25.00000% :  8368.443us
00:08:18.207 50.00000% :  8670.917us
00:08:18.207 75.00000% :  9023.803us
00:08:18.207 90.00000% :  9830.400us
00:08:18.207 95.00000% : 10889.058us
00:08:18.207 98.00000% : 13510.498us
00:08:18.207 99.00000% : 15829.465us
00:08:18.207 99.50000% : 20568.222us
00:08:18.207 99.90000% : 26416.049us
00:08:18.207 99.99000% : 26617.698us
00:08:18.207 99.99900% : 26617.698us
00:08:18.207 99.99990% : 26617.698us
00:08:18.207 99.99999% : 26617.698us
00:08:18.207 
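A quick way to sanity-check the device table above: the MiB/s column should equal IOPS times the IO size, divided by 2^20, where the IO size comes from the "-o 12288" flag on the spdk_nvme_perf command line (12 KiB writes). The following minimal Python sketch is illustrative only, not part of the SPDK test suite; the device name, IOPS, and MiB/s values are copied from the log.

# Consistency check: MiB/s = IOPS * io_size_bytes / 2**20.
IO_SIZE_BYTES = 12288  # from "-o 12288" on the spdk_nvme_perf command line

rows = [
    # (device, IOPS, MiB/s as reported above)
    ("PCIE (0000:00:10.0) NSID 1", 14247.91, 166.97),
    ("Total (all six namespaces)", 85487.44, 1001.81),
]

for device, iops, reported_mib_s in rows:
    computed = iops * IO_SIZE_BYTES / 2**20
    # Reported values are rounded to two decimals, so allow a small tolerance.
    assert abs(computed - reported_mib_s) < 0.01, device
    print(f"{device}: computed {computed:.2f} MiB/s, reported {reported_mib_s:.2f}")

Both rows check out (14247.91 IOPS * 12288 B is about 166.97 MiB/s), which also confirms that all six namespaces sustained essentially identical throughput in this run.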
00:08:18.207 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:18.207 ==============================================================================
00:08:18.207        Range in us     Cumulative    IO count
00:08:18.208 [per-bucket rows omitted: first completions in the 5721.797 - 5747.003us bucket (0.0490%); cumulative count reaches 100.0000% by 28230.892us]
00:08:18.208 
00:08:18.208 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:18.208 ==============================================================================
00:08:18.208        Range in us     Cumulative    IO count
00:08:18.209 [per-bucket rows omitted: first completions in the 5419.323 - 5444.529us bucket (0.0210%); cumulative count reaches 100.0000% by 28230.892us]
00:08:18.209 
00:08:18.209 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:18.209 ==============================================================================
00:08:18.209        Range in us     Cumulative    IO count
00:08:18.210 [per-bucket rows omitted: first completions in the 4411.077 - 4436.283us bucket (0.0070%); cumulative count reaches 100.0000% by 28029.243us]
00:08:18.210 
00:08:18.210 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:18.210 ==============================================================================
00:08:18.210        Range in us     Cumulative    IO count
00:08:18.211 [per-bucket rows omitted: first completions in the 4159.015 - 4184.222us bucket (0.0140%); the excerpt breaks off mid-histogram at roughly 55% cumulative, near the 8670.917us bucket]
8670.917 - 8721.329: 58.5762% ( 520) 00:08:18.211 8721.329 - 8771.742: 62.7382% ( 594) 00:08:18.211 8771.742 - 8822.154: 66.1225% ( 483) 00:08:18.211 8822.154 - 8872.566: 69.0863% ( 423) 00:08:18.211 8872.566 - 8922.978: 71.8119% ( 389) 00:08:18.211 8922.978 - 8973.391: 74.5376% ( 389) 00:08:18.211 8973.391 - 9023.803: 76.6886% ( 307) 00:08:18.211 9023.803 - 9074.215: 78.2791% ( 227) 00:08:18.211 9074.215 - 9124.628: 79.5544% ( 182) 00:08:18.211 9124.628 - 9175.040: 80.7105% ( 165) 00:08:18.211 9175.040 - 9225.452: 82.1469% ( 205) 00:08:18.211 9225.452 - 9275.865: 82.9737% ( 118) 00:08:18.211 9275.865 - 9326.277: 83.8075% ( 119) 00:08:18.211 9326.277 - 9376.689: 84.6342% ( 118) 00:08:18.211 9376.689 - 9427.102: 85.4891% ( 122) 00:08:18.211 9427.102 - 9477.514: 86.2108% ( 103) 00:08:18.211 9477.514 - 9527.926: 86.8624% ( 93) 00:08:18.211 9527.926 - 9578.338: 87.4089% ( 78) 00:08:18.211 9578.338 - 9628.751: 87.8363% ( 61) 00:08:18.211 9628.751 - 9679.163: 88.2988% ( 66) 00:08:18.211 9679.163 - 9729.575: 88.8173% ( 74) 00:08:18.211 9729.575 - 9779.988: 89.5320% ( 102) 00:08:18.211 9779.988 - 9830.400: 89.9594% ( 61) 00:08:18.211 9830.400 - 9880.812: 90.3377% ( 54) 00:08:18.211 9880.812 - 9931.225: 90.6460% ( 44) 00:08:18.211 9931.225 - 9981.637: 90.8492% ( 29) 00:08:18.211 9981.637 - 10032.049: 91.1295% ( 40) 00:08:18.211 10032.049 - 10082.462: 91.3327% ( 29) 00:08:18.211 10082.462 - 10132.874: 91.5219% ( 27) 00:08:18.211 10132.874 - 10183.286: 91.6550% ( 19) 00:08:18.211 10183.286 - 10233.698: 91.8091% ( 22) 00:08:18.211 10233.698 - 10284.111: 92.0123% ( 29) 00:08:18.211 10284.111 - 10334.523: 92.0964% ( 12) 00:08:18.211 10334.523 - 10384.935: 92.2015% ( 15) 00:08:18.211 10384.935 - 10435.348: 92.3136% ( 16) 00:08:18.211 10435.348 - 10485.760: 92.4187% ( 15) 00:08:18.211 10485.760 - 10536.172: 92.5589% ( 20) 00:08:18.211 10536.172 - 10586.585: 92.8531% ( 42) 00:08:18.211 10586.585 - 10636.997: 93.0353% ( 26) 00:08:18.211 10636.997 - 10687.409: 93.2245% ( 27) 00:08:18.211 10687.409 - 10737.822: 93.4207% ( 28) 00:08:18.211 10737.822 - 10788.234: 93.5748% ( 22) 00:08:18.211 10788.234 - 10838.646: 93.7710% ( 28) 00:08:18.211 10838.646 - 10889.058: 94.0092% ( 34) 00:08:18.211 10889.058 - 10939.471: 94.3105% ( 43) 00:08:18.211 10939.471 - 10989.883: 94.5067% ( 28) 00:08:18.211 10989.883 - 11040.295: 94.7099% ( 29) 00:08:18.211 11040.295 - 11090.708: 94.8360% ( 18) 00:08:18.211 11090.708 - 11141.120: 94.9552% ( 17) 00:08:18.211 11141.120 - 11191.532: 95.0883% ( 19) 00:08:18.211 11191.532 - 11241.945: 95.2144% ( 18) 00:08:18.211 11241.945 - 11292.357: 95.3826% ( 24) 00:08:18.211 11292.357 - 11342.769: 95.5017% ( 17) 00:08:18.211 11342.769 - 11393.182: 95.6278% ( 18) 00:08:18.211 11393.182 - 11443.594: 95.7890% ( 23) 00:08:18.211 11443.594 - 11494.006: 95.8590% ( 10) 00:08:18.211 11494.006 - 11544.418: 95.9221% ( 9) 00:08:18.211 11544.418 - 11594.831: 95.9922% ( 10) 00:08:18.211 11594.831 - 11645.243: 96.0552% ( 9) 00:08:18.211 11645.243 - 11695.655: 96.1113% ( 8) 00:08:18.211 11695.655 - 11746.068: 96.1533% ( 6) 00:08:18.211 11746.068 - 11796.480: 96.2374% ( 12) 00:08:18.211 11796.480 - 11846.892: 96.3285% ( 13) 00:08:18.211 11846.892 - 11897.305: 96.3845% ( 8) 00:08:18.211 11897.305 - 11947.717: 96.4266% ( 6) 00:08:18.211 11947.717 - 11998.129: 96.4826% ( 8) 00:08:18.211 11998.129 - 12048.542: 96.5177% ( 5) 00:08:18.211 12048.542 - 12098.954: 96.5527% ( 5) 00:08:18.211 12098.954 - 12149.366: 96.5947% ( 6) 00:08:18.211 12149.366 - 12199.778: 96.6228% ( 4) 00:08:18.211 12199.778 - 12250.191: 96.6718% 
( 7) 00:08:18.211 12250.191 - 12300.603: 96.7489% ( 11) 00:08:18.211 12300.603 - 12351.015: 96.8260% ( 11) 00:08:18.211 12351.015 - 12401.428: 96.8680% ( 6) 00:08:18.211 12401.428 - 12451.840: 96.9381% ( 10) 00:08:18.211 12451.840 - 12502.252: 96.9801% ( 6) 00:08:18.211 12502.252 - 12552.665: 97.0291% ( 7) 00:08:18.211 12552.665 - 12603.077: 97.0572% ( 4) 00:08:18.211 12603.077 - 12653.489: 97.0782% ( 3) 00:08:18.211 12653.489 - 12703.902: 97.1272% ( 7) 00:08:18.211 12703.902 - 12754.314: 97.1833% ( 8) 00:08:18.211 12754.314 - 12804.726: 97.2323% ( 7) 00:08:18.211 12804.726 - 12855.138: 97.3304% ( 14) 00:08:18.211 12855.138 - 12905.551: 97.4285% ( 14) 00:08:18.211 12905.551 - 13006.375: 97.5056% ( 11) 00:08:18.211 13006.375 - 13107.200: 97.5827% ( 11) 00:08:18.211 13107.200 - 13208.025: 97.6387% ( 8) 00:08:18.211 13208.025 - 13308.849: 97.6948% ( 8) 00:08:18.211 13308.849 - 13409.674: 97.7578% ( 9) 00:08:18.211 13409.674 - 13510.498: 97.8139% ( 8) 00:08:18.211 13510.498 - 13611.323: 97.8559% ( 6) 00:08:18.211 13611.323 - 13712.148: 97.9050% ( 7) 00:08:18.211 13712.148 - 13812.972: 97.9260% ( 3) 00:08:18.212 13812.972 - 13913.797: 97.9610% ( 5) 00:08:18.212 13913.797 - 14014.622: 98.0241% ( 9) 00:08:18.212 14014.622 - 14115.446: 98.1222% ( 14) 00:08:18.212 14115.446 - 14216.271: 98.1923% ( 10) 00:08:18.212 14216.271 - 14317.095: 98.2413% ( 7) 00:08:18.212 14317.095 - 14417.920: 98.2974% ( 8) 00:08:18.212 14417.920 - 14518.745: 98.3464% ( 7) 00:08:18.212 14518.745 - 14619.569: 98.4445% ( 14) 00:08:18.212 14619.569 - 14720.394: 98.5916% ( 21) 00:08:18.212 14720.394 - 14821.218: 98.6687% ( 11) 00:08:18.212 14821.218 - 14922.043: 98.7668% ( 14) 00:08:18.212 14922.043 - 15022.868: 98.8999% ( 19) 00:08:18.212 15022.868 - 15123.692: 98.9700% ( 10) 00:08:18.212 15123.692 - 15224.517: 98.9980% ( 4) 00:08:18.212 15224.517 - 15325.342: 99.0261% ( 4) 00:08:18.212 15325.342 - 15426.166: 99.0541% ( 4) 00:08:18.212 15426.166 - 15526.991: 99.0821% ( 4) 00:08:18.212 15526.991 - 15627.815: 99.1031% ( 3) 00:08:18.212 18450.905 - 18551.729: 99.1312% ( 4) 00:08:18.212 18551.729 - 18652.554: 99.1592% ( 4) 00:08:18.212 18652.554 - 18753.378: 99.1802% ( 3) 00:08:18.212 18753.378 - 18854.203: 99.2082% ( 4) 00:08:18.212 18854.203 - 18955.028: 99.2363% ( 4) 00:08:18.212 18955.028 - 19055.852: 99.2643% ( 4) 00:08:18.212 19055.852 - 19156.677: 99.2853% ( 3) 00:08:18.212 19156.677 - 19257.502: 99.3133% ( 4) 00:08:18.212 19257.502 - 19358.326: 99.3414% ( 4) 00:08:18.212 19358.326 - 19459.151: 99.3694% ( 4) 00:08:18.212 19459.151 - 19559.975: 99.3904% ( 3) 00:08:18.212 19559.975 - 19660.800: 99.4184% ( 4) 00:08:18.212 19660.800 - 19761.625: 99.4465% ( 4) 00:08:18.212 19761.625 - 19862.449: 99.4745% ( 4) 00:08:18.212 19862.449 - 19963.274: 99.4955% ( 3) 00:08:18.212 19963.274 - 20064.098: 99.5235% ( 4) 00:08:18.212 20064.098 - 20164.923: 99.5446% ( 3) 00:08:18.212 20164.923 - 20265.748: 99.5516% ( 1) 00:08:18.212 26416.049 - 26617.698: 99.6216% ( 10) 00:08:18.212 26617.698 - 26819.348: 99.7127% ( 13) 00:08:18.212 26819.348 - 27020.997: 99.8038% ( 13) 00:08:18.212 27020.997 - 27222.646: 99.9019% ( 14) 00:08:18.212 27222.646 - 27424.295: 99.9930% ( 13) 00:08:18.212 27424.295 - 27625.945: 100.0000% ( 1) 00:08:18.212 00:08:18.212 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:18.212 ============================================================================== 00:08:18.212 Range in us Cumulative IO count 00:08:18.212 3780.923 - 3806.129: 0.0420% ( 6) 00:08:18.212 3806.129 - 3831.335: 0.0841% ( 6) 
00:08:18.212 3831.335 - 3856.542: 0.1121% ( 4) 00:08:18.212 3856.542 - 3881.748: 0.1471% ( 5) 00:08:18.212 3881.748 - 3906.954: 0.2172% ( 10) 00:08:18.212 3906.954 - 3932.160: 0.2733% ( 8) 00:08:18.212 3932.160 - 3957.366: 0.3013% ( 4) 00:08:18.212 3957.366 - 3982.572: 0.3153% ( 2) 00:08:18.212 3982.572 - 4007.778: 0.3363% ( 3) 00:08:18.212 4007.778 - 4032.985: 0.3503% ( 2) 00:08:18.212 4032.985 - 4058.191: 0.3643% ( 2) 00:08:18.212 4058.191 - 4083.397: 0.3854% ( 3) 00:08:18.212 4083.397 - 4108.603: 0.3994% ( 2) 00:08:18.212 4108.603 - 4133.809: 0.4134% ( 2) 00:08:18.212 4133.809 - 4159.015: 0.4274% ( 2) 00:08:18.212 4159.015 - 4184.222: 0.4414% ( 2) 00:08:18.212 4184.222 - 4209.428: 0.4484% ( 1) 00:08:18.212 6452.775 - 6503.188: 0.4624% ( 2) 00:08:18.212 6503.188 - 6553.600: 0.4905% ( 4) 00:08:18.212 6553.600 - 6604.012: 0.5185% ( 4) 00:08:18.212 6604.012 - 6654.425: 0.5956% ( 11) 00:08:18.212 6654.425 - 6704.837: 0.7567% ( 23) 00:08:18.212 6704.837 - 6755.249: 0.8338% ( 11) 00:08:18.212 6755.249 - 6805.662: 0.8899% ( 8) 00:08:18.212 6805.662 - 6856.074: 0.9739% ( 12) 00:08:18.212 6856.074 - 6906.486: 1.0720% ( 14) 00:08:18.212 6906.486 - 6956.898: 1.2122% ( 20) 00:08:18.212 6956.898 - 7007.311: 1.2612% ( 7) 00:08:18.212 7007.311 - 7057.723: 1.3173% ( 8) 00:08:18.212 7057.723 - 7108.135: 1.3663% ( 7) 00:08:18.212 7108.135 - 7158.548: 1.4364% ( 10) 00:08:18.212 7158.548 - 7208.960: 1.4924% ( 8) 00:08:18.212 7208.960 - 7259.372: 1.6256% ( 19) 00:08:18.212 7259.372 - 7309.785: 1.7867% ( 23) 00:08:18.212 7309.785 - 7360.197: 2.0460% ( 37) 00:08:18.212 7360.197 - 7410.609: 2.2632% ( 31) 00:08:18.212 7410.609 - 7461.022: 2.4103% ( 21) 00:08:18.212 7461.022 - 7511.434: 2.5084% ( 14) 00:08:18.212 7511.434 - 7561.846: 2.6065% ( 14) 00:08:18.212 7561.846 - 7612.258: 2.6696% ( 9) 00:08:18.212 7612.258 - 7662.671: 2.7396% ( 10) 00:08:18.212 7662.671 - 7713.083: 2.8027% ( 9) 00:08:18.212 7713.083 - 7763.495: 2.9288% ( 18) 00:08:18.212 7763.495 - 7813.908: 3.1740% ( 35) 00:08:18.212 7813.908 - 7864.320: 3.6996% ( 75) 00:08:18.212 7864.320 - 7914.732: 4.4353% ( 105) 00:08:18.212 7914.732 - 7965.145: 5.3952% ( 137) 00:08:18.212 7965.145 - 8015.557: 6.5022% ( 158) 00:08:18.212 8015.557 - 8065.969: 8.5132% ( 287) 00:08:18.212 8065.969 - 8116.382: 11.1547% ( 377) 00:08:18.212 8116.382 - 8166.794: 13.7752% ( 374) 00:08:18.212 8166.794 - 8217.206: 16.8021% ( 432) 00:08:18.212 8217.206 - 8267.618: 20.3896% ( 512) 00:08:18.212 8267.618 - 8318.031: 24.1802% ( 541) 00:08:18.212 8318.031 - 8368.443: 28.2581% ( 582) 00:08:18.212 8368.443 - 8418.855: 32.5182% ( 608) 00:08:18.212 8418.855 - 8469.268: 36.9815% ( 637) 00:08:18.212 8469.268 - 8519.680: 41.4308% ( 635) 00:08:18.212 8519.680 - 8570.092: 45.7960% ( 623) 00:08:18.212 8570.092 - 8620.505: 49.8669% ( 581) 00:08:18.212 8620.505 - 8670.917: 54.5404% ( 667) 00:08:18.212 8670.917 - 8721.329: 58.7934% ( 607) 00:08:18.212 8721.329 - 8771.742: 62.9274% ( 590) 00:08:18.212 8771.742 - 8822.154: 66.4098% ( 497) 00:08:18.212 8822.154 - 8872.566: 69.7520% ( 477) 00:08:18.212 8872.566 - 8922.978: 72.4636% ( 387) 00:08:18.212 8922.978 - 8973.391: 74.4815% ( 288) 00:08:18.212 8973.391 - 9023.803: 76.5695% ( 298) 00:08:18.212 9023.803 - 9074.215: 78.1390% ( 224) 00:08:18.212 9074.215 - 9124.628: 79.6104% ( 210) 00:08:18.212 9124.628 - 9175.040: 81.1309% ( 217) 00:08:18.212 9175.040 - 9225.452: 82.1889% ( 151) 00:08:18.212 9225.452 - 9275.865: 83.0367% ( 121) 00:08:18.212 9275.865 - 9326.277: 84.0036% ( 138) 00:08:18.212 9326.277 - 9376.689: 84.9566% ( 136) 00:08:18.212 
9376.689 - 9427.102: 85.6642% ( 101) 00:08:18.212 9427.102 - 9477.514: 86.1827% ( 74) 00:08:18.212 9477.514 - 9527.926: 86.7573% ( 82) 00:08:18.212 9527.926 - 9578.338: 87.2968% ( 77) 00:08:18.212 9578.338 - 9628.751: 87.9905% ( 99) 00:08:18.212 9628.751 - 9679.163: 88.5650% ( 82) 00:08:18.212 9679.163 - 9729.575: 88.9994% ( 62) 00:08:18.212 9729.575 - 9779.988: 89.3358% ( 48) 00:08:18.212 9779.988 - 9830.400: 89.7211% ( 55) 00:08:18.212 9830.400 - 9880.812: 90.1345% ( 59) 00:08:18.212 9880.812 - 9931.225: 90.4148% ( 40) 00:08:18.212 9931.225 - 9981.637: 90.6670% ( 36) 00:08:18.212 9981.637 - 10032.049: 90.8632% ( 28) 00:08:18.212 10032.049 - 10082.462: 91.0594% ( 28) 00:08:18.212 10082.462 - 10132.874: 91.2346% ( 25) 00:08:18.212 10132.874 - 10183.286: 91.4027% ( 24) 00:08:18.212 10183.286 - 10233.698: 91.6270% ( 32) 00:08:18.212 10233.698 - 10284.111: 91.9423% ( 45) 00:08:18.212 10284.111 - 10334.523: 92.1595% ( 31) 00:08:18.212 10334.523 - 10384.935: 92.3276% ( 24) 00:08:18.212 10384.935 - 10435.348: 92.6570% ( 47) 00:08:18.212 10435.348 - 10485.760: 92.9372% ( 40) 00:08:18.212 10485.760 - 10536.172: 93.1334% ( 28) 00:08:18.212 10536.172 - 10586.585: 93.2595% ( 18) 00:08:18.212 10586.585 - 10636.997: 93.4277% ( 24) 00:08:18.212 10636.997 - 10687.409: 93.5678% ( 20) 00:08:18.212 10687.409 - 10737.822: 93.7500% ( 26) 00:08:18.212 10737.822 - 10788.234: 93.9041% ( 22) 00:08:18.212 10788.234 - 10838.646: 94.0583% ( 22) 00:08:18.212 10838.646 - 10889.058: 94.4226% ( 52) 00:08:18.212 10889.058 - 10939.471: 94.5908% ( 24) 00:08:18.212 10939.471 - 10989.883: 94.7379% ( 21) 00:08:18.212 10989.883 - 11040.295: 94.9131% ( 25) 00:08:18.212 11040.295 - 11090.708: 95.1163% ( 29) 00:08:18.212 11090.708 - 11141.120: 95.2915% ( 25) 00:08:18.212 11141.120 - 11191.532: 95.4666% ( 25) 00:08:18.212 11191.532 - 11241.945: 95.5998% ( 19) 00:08:18.212 11241.945 - 11292.357: 95.6698% ( 10) 00:08:18.212 11292.357 - 11342.769: 95.7049% ( 5) 00:08:18.212 11342.769 - 11393.182: 95.7539% ( 7) 00:08:18.212 11393.182 - 11443.594: 95.8030% ( 7) 00:08:18.212 11443.594 - 11494.006: 95.8450% ( 6) 00:08:18.212 11494.006 - 11544.418: 95.9011% ( 8) 00:08:18.212 11544.418 - 11594.831: 95.9291% ( 4) 00:08:18.212 11594.831 - 11645.243: 95.9571% ( 4) 00:08:18.212 11645.243 - 11695.655: 95.9851% ( 4) 00:08:18.212 11695.655 - 11746.068: 96.0202% ( 5) 00:08:18.212 11746.068 - 11796.480: 96.0412% ( 3) 00:08:18.212 11796.480 - 11846.892: 96.0692% ( 4) 00:08:18.212 11846.892 - 11897.305: 96.0973% ( 4) 00:08:18.212 11897.305 - 11947.717: 96.1883% ( 13) 00:08:18.212 11947.717 - 11998.129: 96.3075% ( 17) 00:08:18.212 11998.129 - 12048.542: 96.4406% ( 19) 00:08:18.212 12048.542 - 12098.954: 96.5177% ( 11) 00:08:18.212 12098.954 - 12149.366: 96.5947% ( 11) 00:08:18.212 12149.366 - 12199.778: 96.6858% ( 13) 00:08:18.212 12199.778 - 12250.191: 96.7629% ( 11) 00:08:18.212 12250.191 - 12300.603: 96.8820% ( 17) 00:08:18.212 12300.603 - 12351.015: 97.0081% ( 18) 00:08:18.212 12351.015 - 12401.428: 97.1342% ( 18) 00:08:18.212 12401.428 - 12451.840: 97.1973% ( 9) 00:08:18.212 12451.840 - 12502.252: 97.2604% ( 9) 00:08:18.212 12502.252 - 12552.665: 97.3164% ( 8) 00:08:18.212 12552.665 - 12603.077: 97.3795% ( 9) 00:08:18.212 12603.077 - 12653.489: 97.4355% ( 8) 00:08:18.213 12653.489 - 12703.902: 97.4916% ( 8) 00:08:18.213 12703.902 - 12754.314: 97.5617% ( 10) 00:08:18.213 12754.314 - 12804.726: 97.6177% ( 8) 00:08:18.213 12804.726 - 12855.138: 97.6527% ( 5) 00:08:18.213 12855.138 - 12905.551: 97.6808% ( 4) 00:08:18.213 12905.551 - 13006.375: 
97.7158% ( 5) 00:08:18.213 13006.375 - 13107.200: 97.7438% ( 4) 00:08:18.213 13107.200 - 13208.025: 97.7578% ( 2) 00:08:18.213 13611.323 - 13712.148: 97.7649% ( 1) 00:08:18.213 13712.148 - 13812.972: 97.8069% ( 6) 00:08:18.213 13812.972 - 13913.797: 97.9260% ( 17) 00:08:18.213 13913.797 - 14014.622: 98.0591% ( 19) 00:08:18.213 14014.622 - 14115.446: 98.1082% ( 7) 00:08:18.213 14115.446 - 14216.271: 98.1642% ( 8) 00:08:18.213 14216.271 - 14317.095: 98.2133% ( 7) 00:08:18.213 14317.095 - 14417.920: 98.2693% ( 8) 00:08:18.213 14417.920 - 14518.745: 98.3184% ( 7) 00:08:18.213 14518.745 - 14619.569: 98.3744% ( 8) 00:08:18.213 14619.569 - 14720.394: 98.4515% ( 11) 00:08:18.213 14720.394 - 14821.218: 98.5566% ( 15) 00:08:18.213 14821.218 - 14922.043: 98.6687% ( 16) 00:08:18.213 14922.043 - 15022.868: 98.7248% ( 8) 00:08:18.213 15022.868 - 15123.692: 98.7598% ( 5) 00:08:18.213 15123.692 - 15224.517: 98.7948% ( 5) 00:08:18.213 15224.517 - 15325.342: 98.8229% ( 4) 00:08:18.213 15325.342 - 15426.166: 98.8579% ( 5) 00:08:18.213 15426.166 - 15526.991: 98.8929% ( 5) 00:08:18.213 15526.991 - 15627.815: 98.9420% ( 7) 00:08:18.213 15627.815 - 15728.640: 99.0261% ( 12) 00:08:18.213 15728.640 - 15829.465: 99.0821% ( 8) 00:08:18.213 15829.465 - 15930.289: 99.1031% ( 3) 00:08:18.213 18854.203 - 18955.028: 99.1172% ( 2) 00:08:18.213 18955.028 - 19055.852: 99.1452% ( 4) 00:08:18.213 19055.852 - 19156.677: 99.1732% ( 4) 00:08:18.213 19156.677 - 19257.502: 99.1942% ( 3) 00:08:18.213 19257.502 - 19358.326: 99.2363% ( 6) 00:08:18.213 19358.326 - 19459.151: 99.2713% ( 5) 00:08:18.213 19459.151 - 19559.975: 99.3063% ( 5) 00:08:18.213 19559.975 - 19660.800: 99.3414% ( 5) 00:08:18.213 19660.800 - 19761.625: 99.3624% ( 3) 00:08:18.213 19761.625 - 19862.449: 99.4114% ( 7) 00:08:18.213 19862.449 - 19963.274: 99.4395% ( 4) 00:08:18.213 19963.274 - 20064.098: 99.4535% ( 2) 00:08:18.213 20064.098 - 20164.923: 99.4815% ( 4) 00:08:18.213 20164.923 - 20265.748: 99.5025% ( 3) 00:08:18.213 20265.748 - 20366.572: 99.5165% ( 2) 00:08:18.213 20366.572 - 20467.397: 99.5376% ( 3) 00:08:18.213 20467.397 - 20568.222: 99.5516% ( 2) 00:08:18.213 25710.277 - 25811.102: 99.5866% ( 5) 00:08:18.213 25811.102 - 26012.751: 99.6076% ( 3) 00:08:18.213 26012.751 - 26214.400: 99.6707% ( 9) 00:08:18.213 26214.400 - 26416.049: 99.7337% ( 9) 00:08:18.213 26416.049 - 26617.698: 99.8178% ( 12) 00:08:18.213 26617.698 - 26819.348: 99.9019% ( 12) 00:08:18.213 26819.348 - 27020.997: 99.9860% ( 12) 00:08:18.213 27020.997 - 27222.646: 100.0000% ( 2) 00:08:18.213 00:08:18.213 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:18.213 ============================================================================== 00:08:18.213 Range in us Cumulative IO count 00:08:18.213 3528.862 - 3554.068: 0.0140% ( 2) 00:08:18.213 3554.068 - 3579.274: 0.0350% ( 3) 00:08:18.213 3579.274 - 3604.480: 0.0981% ( 9) 00:08:18.213 3604.480 - 3629.686: 0.1822% ( 12) 00:08:18.213 3629.686 - 3654.892: 0.2663% ( 12) 00:08:18.213 3654.892 - 3680.098: 0.2943% ( 4) 00:08:18.213 3680.098 - 3705.305: 0.3153% ( 3) 00:08:18.213 3705.305 - 3730.511: 0.3293% ( 2) 00:08:18.213 3730.511 - 3755.717: 0.3363% ( 1) 00:08:18.213 3755.717 - 3780.923: 0.3503% ( 2) 00:08:18.213 3780.923 - 3806.129: 0.3643% ( 2) 00:08:18.213 3806.129 - 3831.335: 0.3784% ( 2) 00:08:18.213 3831.335 - 3856.542: 0.3924% ( 2) 00:08:18.213 3856.542 - 3881.748: 0.4064% ( 2) 00:08:18.213 3881.748 - 3906.954: 0.4204% ( 2) 00:08:18.213 3906.954 - 3932.160: 0.4344% ( 2) 00:08:18.213 3932.160 - 3957.366: 0.4484% ( 2) 
00:08:18.213 6251.126 - 6276.332: 0.4554% ( 1) 00:08:18.213 6276.332 - 6301.538: 0.4695% ( 2) 00:08:18.213 6301.538 - 6326.745: 0.4835% ( 2) 00:08:18.213 6326.745 - 6351.951: 0.5045% ( 3) 00:08:18.213 6351.951 - 6377.157: 0.5395% ( 5) 00:08:18.213 6377.157 - 6402.363: 0.5816% ( 6) 00:08:18.213 6402.363 - 6427.569: 0.6726% ( 13) 00:08:18.213 6427.569 - 6452.775: 0.7427% ( 10) 00:08:18.213 6452.775 - 6503.188: 0.7988% ( 8) 00:08:18.213 6503.188 - 6553.600: 0.8268% ( 4) 00:08:18.213 6553.600 - 6604.012: 0.8548% ( 4) 00:08:18.213 6604.012 - 6654.425: 0.8899% ( 5) 00:08:18.213 6654.425 - 6704.837: 0.9039% ( 2) 00:08:18.213 6704.837 - 6755.249: 0.9179% ( 2) 00:08:18.213 6755.249 - 6805.662: 0.9529% ( 5) 00:08:18.213 6805.662 - 6856.074: 0.9950% ( 6) 00:08:18.213 6856.074 - 6906.486: 1.0650% ( 10) 00:08:18.213 6906.486 - 6956.898: 1.2612% ( 28) 00:08:18.213 6956.898 - 7007.311: 1.3383% ( 11) 00:08:18.213 7007.311 - 7057.723: 1.4154% ( 11) 00:08:18.213 7057.723 - 7108.135: 1.4994% ( 12) 00:08:18.213 7108.135 - 7158.548: 1.6186% ( 17) 00:08:18.213 7158.548 - 7208.960: 1.7727% ( 22) 00:08:18.213 7208.960 - 7259.372: 2.0109% ( 34) 00:08:18.213 7259.372 - 7309.785: 2.0950% ( 12) 00:08:18.213 7309.785 - 7360.197: 2.1651% ( 10) 00:08:18.213 7360.197 - 7410.609: 2.2772% ( 16) 00:08:18.213 7410.609 - 7461.022: 2.4033% ( 18) 00:08:18.213 7461.022 - 7511.434: 2.5855% ( 26) 00:08:18.213 7511.434 - 7561.846: 2.6345% ( 7) 00:08:18.213 7561.846 - 7612.258: 2.6626% ( 4) 00:08:18.213 7612.258 - 7662.671: 2.6836% ( 3) 00:08:18.213 7662.671 - 7713.083: 2.6976% ( 2) 00:08:18.213 7713.083 - 7763.495: 2.7116% ( 2) 00:08:18.213 7763.495 - 7813.908: 2.8447% ( 19) 00:08:18.213 7813.908 - 7864.320: 3.0760% ( 33) 00:08:18.213 7864.320 - 7914.732: 3.6365% ( 80) 00:08:18.213 7914.732 - 7965.145: 4.2601% ( 89) 00:08:18.213 7965.145 - 8015.557: 5.7175% ( 208) 00:08:18.213 8015.557 - 8065.969: 7.7985% ( 297) 00:08:18.213 8065.969 - 8116.382: 10.1878% ( 341) 00:08:18.213 8116.382 - 8166.794: 13.0746% ( 412) 00:08:18.213 8166.794 - 8217.206: 16.8862% ( 544) 00:08:18.213 8217.206 - 8267.618: 20.6558% ( 538) 00:08:18.213 8267.618 - 8318.031: 24.4535% ( 542) 00:08:18.213 8318.031 - 8368.443: 28.3842% ( 561) 00:08:18.213 8368.443 - 8418.855: 32.7214% ( 619) 00:08:18.213 8418.855 - 8469.268: 36.8274% ( 586) 00:08:18.213 8469.268 - 8519.680: 41.1785% ( 621) 00:08:18.213 8519.680 - 8570.092: 45.3615% ( 597) 00:08:18.213 8570.092 - 8620.505: 49.5165% ( 593) 00:08:18.213 8620.505 - 8670.917: 53.9168% ( 628) 00:08:18.213 8670.917 - 8721.329: 58.3871% ( 638) 00:08:18.213 8721.329 - 8771.742: 62.5841% ( 599) 00:08:18.213 8771.742 - 8822.154: 66.0174% ( 490) 00:08:18.213 8822.154 - 8872.566: 69.2195% ( 457) 00:08:18.213 8872.566 - 8922.978: 71.9871% ( 395) 00:08:18.213 8922.978 - 8973.391: 74.1802% ( 313) 00:08:18.213 8973.391 - 9023.803: 76.4224% ( 320) 00:08:18.213 9023.803 - 9074.215: 78.0900% ( 238) 00:08:18.213 9074.215 - 9124.628: 79.5824% ( 213) 00:08:18.213 9124.628 - 9175.040: 81.0678% ( 212) 00:08:18.213 9175.040 - 9225.452: 82.3290% ( 180) 00:08:18.213 9225.452 - 9275.865: 83.3590% ( 147) 00:08:18.213 9275.865 - 9326.277: 84.4240% ( 152) 00:08:18.213 9326.277 - 9376.689: 85.2999% ( 125) 00:08:18.213 9376.689 - 9427.102: 85.8814% ( 83) 00:08:18.213 9427.102 - 9477.514: 86.4770% ( 85) 00:08:18.213 9477.514 - 9527.926: 87.1006% ( 89) 00:08:18.213 9527.926 - 9578.338: 87.7102% ( 87) 00:08:18.213 9578.338 - 9628.751: 88.2777% ( 81) 00:08:18.213 9628.751 - 9679.163: 88.9013% ( 89) 00:08:18.213 9679.163 - 9729.575: 89.4409% ( 77) 
00:08:18.213 9729.575 - 9779.988: 89.9734% ( 76) 00:08:18.213 9779.988 - 9830.400: 90.3377% ( 52) 00:08:18.213 9830.400 - 9880.812: 90.6460% ( 44) 00:08:18.213 9880.812 - 9931.225: 90.8913% ( 35) 00:08:18.213 9931.225 - 9981.637: 91.1155% ( 32) 00:08:18.213 9981.637 - 10032.049: 91.3397% ( 32) 00:08:18.213 10032.049 - 10082.462: 91.5008% ( 23) 00:08:18.213 10082.462 - 10132.874: 91.6059% ( 15) 00:08:18.213 10132.874 - 10183.286: 91.7180% ( 16) 00:08:18.213 10183.286 - 10233.698: 91.9072% ( 27) 00:08:18.213 10233.698 - 10284.111: 92.1244% ( 31) 00:08:18.213 10284.111 - 10334.523: 92.4327% ( 44) 00:08:18.213 10334.523 - 10384.935: 92.8181% ( 55) 00:08:18.213 10384.935 - 10435.348: 93.2385% ( 60) 00:08:18.213 10435.348 - 10485.760: 93.6379% ( 57) 00:08:18.213 10485.760 - 10536.172: 93.8411% ( 29) 00:08:18.213 10536.172 - 10586.585: 94.0092% ( 24) 00:08:18.213 10586.585 - 10636.997: 94.2755% ( 38) 00:08:18.213 10636.997 - 10687.409: 94.4577% ( 26) 00:08:18.213 10687.409 - 10737.822: 94.6469% ( 27) 00:08:18.213 10737.822 - 10788.234: 94.7660% ( 17) 00:08:18.213 10788.234 - 10838.646: 94.8921% ( 18) 00:08:18.213 10838.646 - 10889.058: 95.0252% ( 19) 00:08:18.213 10889.058 - 10939.471: 95.1443% ( 17) 00:08:18.213 10939.471 - 10989.883: 95.2424% ( 14) 00:08:18.213 10989.883 - 11040.295: 95.3265% ( 12) 00:08:18.213 11040.295 - 11090.708: 95.4106% ( 12) 00:08:18.213 11090.708 - 11141.120: 95.4737% ( 9) 00:08:18.213 11141.120 - 11191.532: 95.5227% ( 7) 00:08:18.213 11191.532 - 11241.945: 95.5717% ( 7) 00:08:18.213 11241.945 - 11292.357: 95.6278% ( 8) 00:08:18.213 11292.357 - 11342.769: 95.6839% ( 8) 00:08:18.213 11342.769 - 11393.182: 95.7329% ( 7) 00:08:18.213 11393.182 - 11443.594: 95.7890% ( 8) 00:08:18.213 11443.594 - 11494.006: 95.8450% ( 8) 00:08:18.213 11494.006 - 11544.418: 95.8730% ( 4) 00:08:18.214 11544.418 - 11594.831: 95.9151% ( 6) 00:08:18.214 11594.831 - 11645.243: 95.9361% ( 3) 00:08:18.214 11645.243 - 11695.655: 95.9431% ( 1) 00:08:18.214 11695.655 - 11746.068: 95.9501% ( 1) 00:08:18.214 11746.068 - 11796.480: 95.9641% ( 2) 00:08:18.214 11796.480 - 11846.892: 96.0062% ( 6) 00:08:18.214 11846.892 - 11897.305: 96.0272% ( 3) 00:08:18.214 11897.305 - 11947.717: 96.0552% ( 4) 00:08:18.214 11947.717 - 11998.129: 96.1113% ( 8) 00:08:18.214 11998.129 - 12048.542: 96.1673% ( 8) 00:08:18.214 12048.542 - 12098.954: 96.2444% ( 11) 00:08:18.214 12098.954 - 12149.366: 96.3775% ( 19) 00:08:18.214 12149.366 - 12199.778: 96.4966% ( 17) 00:08:18.214 12199.778 - 12250.191: 96.6578% ( 23) 00:08:18.214 12250.191 - 12300.603: 96.7629% ( 15) 00:08:18.214 12300.603 - 12351.015: 96.8330% ( 10) 00:08:18.214 12351.015 - 12401.428: 96.8960% ( 9) 00:08:18.214 12401.428 - 12451.840: 96.9731% ( 11) 00:08:18.214 12451.840 - 12502.252: 97.0432% ( 10) 00:08:18.214 12502.252 - 12552.665: 97.1132% ( 10) 00:08:18.214 12552.665 - 12603.077: 97.2183% ( 15) 00:08:18.214 12603.077 - 12653.489: 97.3374% ( 17) 00:08:18.214 12653.489 - 12703.902: 97.4496% ( 16) 00:08:18.214 12703.902 - 12754.314: 97.5406% ( 13) 00:08:18.214 12754.314 - 12804.726: 97.6037% ( 9) 00:08:18.214 12804.726 - 12855.138: 97.6668% ( 9) 00:08:18.214 12855.138 - 12905.551: 97.6948% ( 4) 00:08:18.214 12905.551 - 13006.375: 97.7228% ( 4) 00:08:18.214 13006.375 - 13107.200: 97.7859% ( 9) 00:08:18.214 13107.200 - 13208.025: 97.8840% ( 14) 00:08:18.214 13208.025 - 13308.849: 97.9540% ( 10) 00:08:18.214 13308.849 - 13409.674: 97.9891% ( 5) 00:08:18.214 13409.674 - 13510.498: 98.0241% ( 5) 00:08:18.214 13510.498 - 13611.323: 98.0451% ( 3) 00:08:18.214 13611.323 
- 13712.148: 98.0802% ( 5) 00:08:18.214 13712.148 - 13812.972: 98.1082% ( 4) 00:08:18.214 13812.972 - 13913.797: 98.1292% ( 3) 00:08:18.214 13913.797 - 14014.622: 98.1572% ( 4) 00:08:18.214 14014.622 - 14115.446: 98.1783% ( 3) 00:08:18.214 14115.446 - 14216.271: 98.1993% ( 3) 00:08:18.214 14216.271 - 14317.095: 98.2063% ( 1) 00:08:18.214 14317.095 - 14417.920: 98.2133% ( 1) 00:08:18.214 14619.569 - 14720.394: 98.2623% ( 7) 00:08:18.214 14720.394 - 14821.218: 98.3184% ( 8) 00:08:18.214 14821.218 - 14922.043: 98.4095% ( 13) 00:08:18.214 14922.043 - 15022.868: 98.4936% ( 12) 00:08:18.214 15022.868 - 15123.692: 98.6197% ( 18) 00:08:18.214 15123.692 - 15224.517: 98.6757% ( 8) 00:08:18.214 15224.517 - 15325.342: 98.7178% ( 6) 00:08:18.214 15325.342 - 15426.166: 98.7598% ( 6) 00:08:18.214 15426.166 - 15526.991: 98.8089% ( 7) 00:08:18.214 15526.991 - 15627.815: 98.8509% ( 6) 00:08:18.214 15627.815 - 15728.640: 98.9560% ( 15) 00:08:18.214 15728.640 - 15829.465: 99.0191% ( 9) 00:08:18.214 15829.465 - 15930.289: 99.0611% ( 6) 00:08:18.214 15930.289 - 16031.114: 99.0961% ( 5) 00:08:18.214 16031.114 - 16131.938: 99.1031% ( 1) 00:08:18.214 19055.852 - 19156.677: 99.1312% ( 4) 00:08:18.214 19156.677 - 19257.502: 99.1592% ( 4) 00:08:18.214 19257.502 - 19358.326: 99.1942% ( 5) 00:08:18.214 19358.326 - 19459.151: 99.2223% ( 4) 00:08:18.214 19459.151 - 19559.975: 99.2573% ( 5) 00:08:18.214 19559.975 - 19660.800: 99.2853% ( 4) 00:08:18.214 19660.800 - 19761.625: 99.3133% ( 4) 00:08:18.214 19761.625 - 19862.449: 99.3484% ( 5) 00:08:18.214 19862.449 - 19963.274: 99.3694% ( 3) 00:08:18.214 19963.274 - 20064.098: 99.3834% ( 2) 00:08:18.214 20064.098 - 20164.923: 99.4114% ( 4) 00:08:18.214 20164.923 - 20265.748: 99.4465% ( 5) 00:08:18.214 20265.748 - 20366.572: 99.4675% ( 3) 00:08:18.214 20366.572 - 20467.397: 99.4885% ( 3) 00:08:18.214 20467.397 - 20568.222: 99.5165% ( 4) 00:08:18.214 20568.222 - 20669.046: 99.5446% ( 4) 00:08:18.214 20669.046 - 20769.871: 99.5516% ( 1) 00:08:18.214 25306.978 - 25407.803: 99.5726% ( 3) 00:08:18.214 25407.803 - 25508.628: 99.5866% ( 2) 00:08:18.214 25508.628 - 25609.452: 99.6637% ( 11) 00:08:18.214 25609.452 - 25710.277: 99.6987% ( 5) 00:08:18.214 25811.102 - 26012.751: 99.7548% ( 8) 00:08:18.214 26012.751 - 26214.400: 99.8459% ( 13) 00:08:18.214 26214.400 - 26416.049: 99.9299% ( 12) 00:08:18.214 26416.049 - 26617.698: 100.0000% ( 10) 00:08:18.214 00:08:18.214 11:39:31 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:18.214 00:08:18.214 real 0m2.398s 00:08:18.214 user 0m2.143s 00:08:18.214 sys 0m0.156s 00:08:18.214 11:39:31 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.214 ************************************ 00:08:18.214 END TEST nvme_perf 00:08:18.214 ************************************ 00:08:18.214 11:39:31 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:18.214 11:39:31 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:18.214 11:39:31 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:18.214 11:39:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.214 11:39:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:18.214 ************************************ 00:08:18.214 START TEST nvme_hello_world 00:08:18.214 ************************************ 00:08:18.214 11:39:31 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:18.475 
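Note: the START/END banners and the real/user/sys triplets that bracket every test in this log come from the harness's run_test helper, which wraps each test command in the shell's time builtin. A minimal sketch of such a wrapper (illustrative only; the real helper is defined in common/autotest_common.sh and its body is not shown in this log), mirroring the nvme_hello_world invocation that follows:

    # Sketch of a run_test-style wrapper; banner format copied from the log.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                     # emits the real/user/sys lines
        local rc=$?                   # exit status of the wrapped command
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test_sketch nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0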
00:08:18.214 11:39:31 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:18.214 11:39:31 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:18.214 11:39:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:18.214 11:39:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:18.214 ************************************
00:08:18.214 START TEST nvme_hello_world
00:08:18.214 ************************************
00:08:18.214 11:39:31 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:18.475 Initializing NVMe Controllers
00:08:18.475 Attached to 0000:00:10.0
00:08:18.475 Namespace ID: 1 size: 6GB
00:08:18.475 Attached to 0000:00:11.0
00:08:18.475 Namespace ID: 1 size: 5GB
00:08:18.475 Attached to 0000:00:13.0
00:08:18.475 Namespace ID: 1 size: 1GB
00:08:18.475 Attached to 0000:00:12.0
00:08:18.475 Namespace ID: 1 size: 4GB
00:08:18.475 Namespace ID: 2 size: 4GB
00:08:18.475 Namespace ID: 3 size: 4GB
00:08:18.475 Initialization complete.
00:08:18.475 INFO: using host memory buffer for IO
00:08:18.475 Hello world!
00:08:18.475 INFO: using host memory buffer for IO
00:08:18.475 Hello world!
00:08:18.475 INFO: using host memory buffer for IO
00:08:18.475 Hello world!
00:08:18.475 INFO: using host memory buffer for IO
00:08:18.475 Hello world!
00:08:18.475 INFO: using host memory buffer for IO
00:08:18.475 Hello world!
00:08:18.475 INFO: using host memory buffer for IO
00:08:18.475 Hello world!
00:08:18.475 
00:08:18.475 real	0m0.172s
00:08:18.475 user	0m0.059s
00:08:18.475 sys	0m0.072s
00:08:18.475 11:39:31 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:18.475 ************************************
00:08:18.475 END TEST nvme_hello_world
00:08:18.475 ************************************
00:08:18.475 11:39:31 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:18.475 11:39:31 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:18.475 11:39:31 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:18.475 11:39:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:18.475 11:39:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:18.475 ************************************
00:08:18.475 START TEST nvme_sgl
00:08:18.475 ************************************
00:08:18.475 11:39:31 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:18.736 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:18.736 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:18.736 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:18.736 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:18.736 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:18.736 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:18.736 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:18.736 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:18.736 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:18.736 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:18.736 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:18.736 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:18.736 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:18.736 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:18.736 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:18.736 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:18.736 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:18.736 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:18.736 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:18.736 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:18.736 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:18.736 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:18.736 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:18.736 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:18.736 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:18.736 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:18.736 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:18.736 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:18.736 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:18.736 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:18.736 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:18.736 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:18.736 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:18.736 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:18.736 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:18.736 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:18.736 NVMe Readv/Writev Request test
00:08:18.736 Attached to 0000:00:10.0
00:08:18.736 Attached to 0000:00:11.0
00:08:18.736 Attached to 0000:00:13.0
00:08:18.736 Attached to 0000:00:12.0
00:08:18.736 0000:00:10.0: build_io_request_2 test passed
00:08:18.736 0000:00:10.0: build_io_request_4 test passed
00:08:18.736 0000:00:10.0: build_io_request_5 test passed
00:08:18.736 0000:00:10.0: build_io_request_6 test passed
00:08:18.736 0000:00:10.0: build_io_request_7 test passed
00:08:18.736 0000:00:10.0: build_io_request_10 test passed
00:08:18.736 0000:00:11.0: build_io_request_2 test passed
00:08:18.737 0000:00:11.0: build_io_request_4 test passed
00:08:18.737 0000:00:11.0: build_io_request_5 test passed
00:08:18.737 0000:00:11.0: build_io_request_6 test passed
00:08:18.737 0000:00:11.0: build_io_request_7 test passed
00:08:18.737 0000:00:11.0: build_io_request_10 test passed
00:08:18.737 Cleaning up...
00:08:18.737 
00:08:18.737 real	0m0.227s
00:08:18.737 user	0m0.115s
00:08:18.737 sys	0m0.066s
00:08:18.737 11:39:31 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:18.737 11:39:31 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:18.737 ************************************
00:08:18.737 END TEST nvme_sgl
00:08:18.737 ************************************
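Note: a quick per-controller tally of the build_io_request results above can be pulled from the captured log with awk (a sketch; the log file name is a placeholder):

    awk '/build_io_request/ {
        ctrl = $2; sub(/:$/, "", ctrl)            # "0000:00:10.0:" -> "0000:00:10.0"
        if (/test passed/)            passed[ctrl]++
        else if (/Invalid IO length/) invalid[ctrl]++
    }
    END {
        for (c in invalid)
            printf "%s: %d passed, %d rejected for length\n", c, passed[c], invalid[c]
    }' nvme-vg-autotest.log

For this run that gives 6 passed / 6 rejected on 0000:00:10.0 and 0000:00:11.0, and 0 passed / 12 rejected on 0000:00:13.0 and 0000:00:12.0.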
00:08:18.737 11:39:32 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:18.737 11:39:32 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:18.737 11:39:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:18.737 11:39:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:18.737 ************************************
00:08:18.737 START TEST nvme_e2edp
00:08:18.737 ************************************
00:08:18.737 11:39:32 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:18.999 NVMe Write/Read with End-to-End data protection test
00:08:18.999 Attached to 0000:00:10.0
00:08:18.999 Attached to 0000:00:11.0
00:08:18.999 Attached to 0000:00:13.0
00:08:18.999 Attached to 0000:00:12.0
00:08:18.999 Cleaning up...
00:08:18.999 
00:08:18.999 real	0m0.178s
00:08:18.999 user	0m0.050s
00:08:18.999 sys	0m0.082s
00:08:18.999 11:39:32 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:18.999 11:39:32 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:18.999 ************************************
00:08:18.999 END TEST nvme_e2edp
00:08:18.999 ************************************
00:08:18.999 11:39:32 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:18.999 11:39:32 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:18.999 11:39:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:18.999 11:39:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:18.999 ************************************
00:08:18.999 START TEST nvme_reserve
00:08:18.999 ************************************
00:08:18.999 11:39:32 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:18.999 =====================================================
00:08:18.999 NVMe Controller at PCI bus 0, device 16, function 0
00:08:18.999 =====================================================
00:08:18.999 Reservations: Not Supported
00:08:18.999 =====================================================
00:08:18.999 NVMe Controller at PCI bus 0, device 17, function 0
00:08:18.999 =====================================================
00:08:18.999 Reservations: Not Supported
00:08:18.999 =====================================================
00:08:18.999 NVMe Controller at PCI bus 0, device 19, function 0
00:08:18.999 =====================================================
00:08:18.999 Reservations: Not Supported
00:08:18.999 =====================================================
00:08:18.999 NVMe Controller at PCI bus 0, device 18, function 0
00:08:18.999 =====================================================
00:08:18.999 Reservations: Not Supported
00:08:18.999 Reservation test passed
00:08:18.999 
00:08:18.999 real	0m0.169s
00:08:18.999 user	0m0.054s
00:08:18.999 sys	0m0.069s
00:08:18.999 11:39:32 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:18.999 11:39:32 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:19.261 ************************************
00:08:19.261 END TEST nvme_reserve
00:08:19.261 ************************************
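Note: the "Reservations: Not Supported" lines above are read from each controller's reported capabilities. On a live system the per-namespace reservation capability byte can be checked with nvme-cli (a sketch; assumes nvme-cli is installed, and the device path is a placeholder):

    # rescap is the namespace's Reservation Capabilities field; zero means no
    # reservation types are supported, matching the report above.
    nvme id-ns /dev/nvme0n1 | grep -i rescap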
00:08:19.261 11:39:32 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:19.261 11:39:32 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:19.261 11:39:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:19.261 11:39:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:19.261 ************************************
00:08:19.261 START TEST nvme_err_injection
00:08:19.261 ************************************
00:08:19.261 11:39:32 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:19.261 NVMe Error Injection test
00:08:19.261 Attached to 0000:00:10.0
00:08:19.261 Attached to 0000:00:11.0
00:08:19.261 Attached to 0000:00:13.0
00:08:19.261 Attached to 0000:00:12.0
00:08:19.261 0000:00:11.0: get features failed as expected
00:08:19.261 0000:00:13.0: get features failed as expected
00:08:19.261 0000:00:12.0: get features failed as expected
00:08:19.261 0000:00:10.0: get features failed as expected
00:08:19.261 0000:00:10.0: get features successfully as expected
00:08:19.261 0000:00:11.0: get features successfully as expected
00:08:19.261 0000:00:13.0: get features successfully as expected
00:08:19.261 0000:00:12.0: get features successfully as expected
00:08:19.261 0000:00:10.0: read failed as expected
00:08:19.261 0000:00:11.0: read failed as expected
00:08:19.261 0000:00:13.0: read failed as expected
00:08:19.261 0000:00:12.0: read failed as expected
00:08:19.261 0000:00:10.0: read successfully as expected
00:08:19.261 0000:00:11.0: read successfully as expected
00:08:19.261 0000:00:13.0: read successfully as expected
00:08:19.261 0000:00:12.0: read successfully as expected
00:08:19.261 Cleaning up...
00:08:19.261 
00:08:19.261 real	0m0.164s
00:08:19.261 user	0m0.048s
00:08:19.261 sys	0m0.079s
00:08:19.261 11:39:32 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:19.261 ************************************
00:08:19.261 END TEST nvme_err_injection
00:08:19.261 ************************************
00:08:19.261 11:39:32 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
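Note: the "get features" probes above are ordinary NVMe admin commands, issued first with error injection armed (expected to fail) and then disarmed (expected to succeed). Outside the harness, an equivalent Get Features round-trip can be issued with nvme-cli (a sketch; device path and feature ID are placeholders):

    # Get Features for FID 0x07 (Number of Queues); -H decodes the fields.
    nvme get-feature /dev/nvme0 -f 0x07 -H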
00:08:19.261 11:39:32 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:19.261 11:39:32 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:08:19.261 11:39:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:19.261 11:39:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:19.261 ************************************
00:08:19.261 START TEST nvme_overhead
00:08:19.261 ************************************
00:08:19.261 11:39:32 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:20.649 Initializing NVMe Controllers
00:08:20.649 Attached to 0000:00:10.0
00:08:20.649 Attached to 0000:00:11.0
00:08:20.649 Attached to 0000:00:13.0
00:08:20.649 Attached to 0000:00:12.0
00:08:20.649 Initialization complete. Launching workers.
00:08:20.649 submit (in ns)   avg, min, max = 11611.7, 9994.6, 68554.6
00:08:20.649 complete (in ns) avg, min, max = 7806.5, 7288.5, 355215.4
00:08:20.649 
00:08:20.649 Submit histogram
00:08:20.649 ================
00:08:20.649 Range in us Cumulative Count
00:08:20.650 [per-bucket samples omitted: buckets from 9.994 us to 68.923 us, reaching 100.0000%]
00:08:20.650 
00:08:20.650 Complete histogram
00:08:20.650 ==================
00:08:20.650 Range in us Cumulative Count
00:08:20.651 [per-bucket samples omitted: buckets from 7.286 us to 356.037 us, reaching 100.0000%]
00:08:20.651 
00:08:20.651 real	0m1.173s
00:08:20.651 user	0m1.053s
00:08:20.651 sys	0m0.073s
00:08:20.651 11:39:33 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:20.651 ************************************
00:08:20.651 END TEST nvme_overhead
00:08:20.651 ************************************
00:08:20.651 11:39:33 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
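Note: the submit/complete figures above are per-IO software overheads in nanoseconds. Restated in microseconds, with the values copied from this run (a self-contained awk sketch):

    awk 'BEGIN {
        printf "submit:   avg %7.2f us  min %7.2f us  max %7.2f us\n", 11611.7/1000,  9994.6/1000,  68554.6/1000
        printf "complete: avg %7.2f us  min %7.2f us  max %7.2f us\n",  7806.5/1000,  7288.5/1000, 355215.4/1000
    }'

That is roughly 11.6 us of submit-side and 7.8 us of completion-side overhead for the 4096-byte IOs this invocation (-o 4096) used.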
00:08:23.948 Starting thread on core 1 with urgent priority queue 00:08:23.948 Starting thread on core 2 with urgent priority queue 00:08:23.948 Starting thread on core 3 with urgent priority queue 00:08:23.948 Starting thread on core 0 with urgent priority queue 00:08:23.948 QEMU NVMe Ctrl (12340 ) core 0: 6869.33 IO/s 14.56 secs/100000 ios 00:08:23.948 QEMU NVMe Ctrl (12342 ) core 0: 6869.33 IO/s 14.56 secs/100000 ios 00:08:23.948 QEMU NVMe Ctrl (12341 ) core 1: 6613.33 IO/s 15.12 secs/100000 ios 00:08:23.948 QEMU NVMe Ctrl (12342 ) core 1: 6613.33 IO/s 15.12 secs/100000 ios 00:08:23.948 QEMU NVMe Ctrl (12343 ) core 2: 5973.33 IO/s 16.74 secs/100000 ios 00:08:23.948 QEMU NVMe Ctrl (12342 ) core 3: 6421.33 IO/s 15.57 secs/100000 ios 00:08:23.948 ======================================================== 00:08:23.948 00:08:23.948 00:08:23.948 real 0m3.195s 00:08:23.948 user 0m9.010s 00:08:23.948 sys 0m0.085s 00:08:23.948 ************************************ 00:08:23.948 END TEST nvme_arbitration 00:08:23.948 ************************************ 00:08:23.948 11:39:37 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.948 11:39:37 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:23.948 11:39:37 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:23.948 11:39:37 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:23.948 11:39:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.948 11:39:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.948 ************************************ 00:08:23.948 START TEST nvme_single_aen 00:08:23.948 ************************************ 00:08:23.948 11:39:37 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:23.948 Asynchronous Event Request test 00:08:23.948 Attached to 0000:00:10.0 00:08:23.948 Attached to 0000:00:11.0 00:08:23.948 Attached to 0000:00:13.0 00:08:23.948 Attached to 0000:00:12.0 00:08:23.948 Reset controller to setup AER completions for this process 00:08:23.948 Registering asynchronous event callbacks... 
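For the nvme_single_aen pass opening here, the whole test is the one binary named in the run_test line above; a minimal reproduction under stated assumptions (-T selecting the temperature-threshold AER path and -i 0 naming shared-memory instance 0 are readings inferred from the surrounding traces, not taken from the tool's help text):

    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0

The mechanism visible below: each controller's threshold is dropped beneath the reported composite temperature of 323 Kelvin, the device answers with an asynchronous event for log page 2, and aer_cb then resets the 343 Kelvin threshold.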
00:08:23.948 Getting orig temperature thresholds of all controllers 00:08:23.948 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.948 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.948 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.948 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:23.948 Setting all controllers temperature threshold low to trigger AER 00:08:23.948 Waiting for all controllers temperature threshold to be set lower 00:08:23.948 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.948 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:23.948 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.948 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:23.948 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.948 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:23.948 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:23.948 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:23.948 Waiting for all controllers to trigger AER and reset threshold 00:08:23.948 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.948 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.948 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.948 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.948 Cleaning up... 00:08:23.948 00:08:23.948 real 0m0.178s 00:08:23.948 user 0m0.055s 00:08:23.948 sys 0m0.078s 00:08:23.948 ************************************ 00:08:23.948 END TEST nvme_single_aen 00:08:23.948 ************************************ 00:08:23.948 11:39:37 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.948 11:39:37 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:23.948 11:39:37 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:23.948 11:39:37 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.948 11:39:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.948 11:39:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.948 ************************************ 00:08:23.948 START TEST nvme_doorbell_aers 00:08:23.948 ************************************ 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
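The nvme_doorbell_aers setup being traced here assembles its device list from the generated bdev config; restated as plain shell, with the jq filter and the per-device invocation copied from this trace and the loop that follows it:

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # 10 s budget per device, exit status of the test preserved
        timeout --preserve-status 10 \
            /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r "trtype:PCIe traddr:${bdf}"
    done

Each device then runs three negative cases (invalid doorbell write, SQ overflow, CQ overflow); the per-case 'Failure:' lines that follow read as the intended outcome, which is consistent with the suite later reporting END TEST nvme_doorbell_aers as passed.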
00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:23.948 11:39:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:24.209 [2024-11-19 11:39:37.510054] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:08:34.213 Executing: test_write_invalid_db 00:08:34.213 Waiting for AER completion... 00:08:34.213 Failure: test_write_invalid_db 00:08:34.213 00:08:34.213 Executing: test_invalid_db_write_overflow_sq 00:08:34.213 Waiting for AER completion... 00:08:34.213 Failure: test_invalid_db_write_overflow_sq 00:08:34.213 00:08:34.213 Executing: test_invalid_db_write_overflow_cq 00:08:34.213 Waiting for AER completion... 00:08:34.213 Failure: test_invalid_db_write_overflow_cq 00:08:34.213 00:08:34.213 11:39:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:34.213 11:39:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:34.213 [2024-11-19 11:39:47.522115] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:08:44.194 Executing: test_write_invalid_db 00:08:44.194 Waiting for AER completion... 00:08:44.194 Failure: test_write_invalid_db 00:08:44.194 00:08:44.194 Executing: test_invalid_db_write_overflow_sq 00:08:44.194 Waiting for AER completion... 00:08:44.194 Failure: test_invalid_db_write_overflow_sq 00:08:44.194 00:08:44.194 Executing: test_invalid_db_write_overflow_cq 00:08:44.194 Waiting for AER completion... 00:08:44.194 Failure: test_invalid_db_write_overflow_cq 00:08:44.194 00:08:44.194 11:39:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:44.194 11:39:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:44.194 [2024-11-19 11:39:57.556217] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:08:54.163 Executing: test_write_invalid_db 00:08:54.163 Waiting for AER completion... 00:08:54.163 Failure: test_write_invalid_db 00:08:54.163 00:08:54.163 Executing: test_invalid_db_write_overflow_sq 00:08:54.163 Waiting for AER completion... 00:08:54.163 Failure: test_invalid_db_write_overflow_sq 00:08:54.163 00:08:54.163 Executing: test_invalid_db_write_overflow_cq 00:08:54.163 Waiting for AER completion... 
00:08:54.163 Failure: test_invalid_db_write_overflow_cq 00:08:54.163 00:08:54.163 11:40:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:54.163 11:40:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:54.420 [2024-11-19 11:40:07.584398] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:09:04.391 Executing: test_write_invalid_db 00:09:04.391 Waiting for AER completion... 00:09:04.391 Failure: test_write_invalid_db 00:09:04.391 00:09:04.391 Executing: test_invalid_db_write_overflow_sq 00:09:04.391 Waiting for AER completion... 00:09:04.391 Failure: test_invalid_db_write_overflow_sq 00:09:04.391 00:09:04.391 Executing: test_invalid_db_write_overflow_cq 00:09:04.391 Waiting for AER completion... 00:09:04.391 Failure: test_invalid_db_write_overflow_cq 00:09:04.391 00:09:04.391 00:09:04.391 real 0m40.165s 00:09:04.391 user 0m34.227s 00:09:04.391 sys 0m5.585s 00:09:04.391 11:40:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.391 11:40:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:04.391 ************************************ 00:09:04.391 END TEST nvme_doorbell_aers 00:09:04.391 ************************************ 00:09:04.391 11:40:17 nvme -- nvme/nvme.sh@97 -- # uname 00:09:04.391 11:40:17 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:04.391 11:40:17 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:04.391 11:40:17 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:04.391 11:40:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.391 11:40:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:04.391 ************************************ 00:09:04.391 START TEST nvme_multi_aen 00:09:04.391 ************************************ 00:09:04.391 11:40:17 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:04.391 [2024-11-19 11:40:17.631351] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:09:04.391 [2024-11-19 11:40:17.631453] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:09:04.391 [2024-11-19 11:40:17.631470] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:09:04.391 [2024-11-19 11:40:17.632820] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:09:04.391 [2024-11-19 11:40:17.632850] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:09:04.391 [2024-11-19 11:40:17.632860] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:09:04.391 [2024-11-19 11:40:17.633807] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. 
Dropping the request. 00:09:04.391 [2024-11-19 11:40:17.633833] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:09:04.391 [2024-11-19 11:40:17.633842] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:09:04.391 [2024-11-19 11:40:17.634943] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:09:04.391 [2024-11-19 11:40:17.635088] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:09:04.391 [2024-11-19 11:40:17.635160] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75028) is not found. Dropping the request. 00:09:04.391 Child process pid: 75554 00:09:04.391 [Child] Asynchronous Event Request test 00:09:04.391 [Child] Attached to 0000:00:10.0 00:09:04.391 [Child] Attached to 0000:00:11.0 00:09:04.391 [Child] Attached to 0000:00:13.0 00:09:04.391 [Child] Attached to 0000:00:12.0 00:09:04.391 [Child] Registering asynchronous event callbacks... 00:09:04.391 [Child] Getting orig temperature thresholds of all controllers 00:09:04.391 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:04.391 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:04.391 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:04.391 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:04.391 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:04.392 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:04.392 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:04.392 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:04.392 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:04.392 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.392 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.392 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.392 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.392 [Child] Cleaning up... 00:09:04.650 Asynchronous Event Request test 00:09:04.650 Attached to 0000:00:10.0 00:09:04.650 Attached to 0000:00:11.0 00:09:04.650 Attached to 0000:00:13.0 00:09:04.650 Attached to 0000:00:12.0 00:09:04.650 Reset controller to setup AER completions for this process 00:09:04.650 Registering asynchronous event callbacks... 
00:09:04.650 Getting orig temperature thresholds of all controllers 00:09:04.650 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:04.650 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:04.650 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:04.650 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:04.650 Setting all controllers temperature threshold low to trigger AER 00:09:04.650 Waiting for all controllers temperature threshold to be set lower 00:09:04.650 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:04.650 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:04.651 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:04.651 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:04.651 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:04.651 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:04.651 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:04.651 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:04.651 Waiting for all controllers to trigger AER and reset threshold 00:09:04.651 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.651 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.651 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.651 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.651 Cleaning up... 00:09:04.651 00:09:04.651 real 0m0.325s 00:09:04.651 user 0m0.096s 00:09:04.651 sys 0m0.141s 00:09:04.651 11:40:17 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.651 11:40:17 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:04.651 ************************************ 00:09:04.651 END TEST nvme_multi_aen 00:09:04.651 ************************************ 00:09:04.651 11:40:17 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:04.651 11:40:17 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:04.651 11:40:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.651 11:40:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:04.651 ************************************ 00:09:04.651 START TEST nvme_startup 00:09:04.651 ************************************ 00:09:04.651 11:40:17 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:04.651 Initializing NVMe Controllers 00:09:04.651 Attached to 0000:00:10.0 00:09:04.651 Attached to 0000:00:11.0 00:09:04.651 Attached to 0000:00:13.0 00:09:04.651 Attached to 0000:00:12.0 00:09:04.651 Initialization complete. 00:09:04.651 Time used:112122.266 (us). 
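The nvme_startup output just above only times controller bring-up: attach everything, print the elapsed time. Invocation as logged, with one assumption flagged: reading -t 1000000 as a budget in microseconds matches the 'Time used' output being printed in that unit, but is not confirmed by the trace itself.

    /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000

Here all four controllers initialized in roughly 112 ms, well inside that budget.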
00:09:04.651 00:09:04.651 real 0m0.157s 00:09:04.651 user 0m0.049s 00:09:04.651 sys 0m0.067s 00:09:04.651 11:40:18 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.651 11:40:18 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:04.651 ************************************ 00:09:04.651 END TEST nvme_startup 00:09:04.651 ************************************ 00:09:04.651 11:40:18 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:04.651 11:40:18 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:04.651 11:40:18 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.651 11:40:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:04.651 ************************************ 00:09:04.651 START TEST nvme_multi_secondary 00:09:04.651 ************************************ 00:09:04.651 11:40:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:04.651 11:40:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75599 00:09:04.651 11:40:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75600 00:09:04.651 11:40:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:04.651 11:40:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:04.651 11:40:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:07.945 Initializing NVMe Controllers 00:09:07.945 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:07.946 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:07.946 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:07.946 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:07.946 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:07.946 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:07.946 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:07.946 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:07.946 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:07.946 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:07.946 Initialization complete. Launching workers. 
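nvme_multi_secondary, starting here, runs three spdk_nvme_perf processes against the same controllers at once, joined through the shared-memory id -i 0. The commands are the ones traced above; treating the longer 5 s run as the DPDK primary that the two 3 s secondaries attach to is an inference from launch order and timings, not something the log states:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # core 0, runs longest
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # core 1
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # core 2
    wait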
00:09:07.946 ======================================================== 00:09:07.946 Latency(us) 00:09:07.946 Device Information : IOPS MiB/s Average min max 00:09:07.946 PCIE (0000:00:10.0) NSID 1 from core 1: 7369.29 28.79 2169.83 770.20 5430.51 00:09:07.946 PCIE (0000:00:11.0) NSID 1 from core 1: 7370.29 28.79 2170.50 791.64 5067.21 00:09:07.946 PCIE (0000:00:13.0) NSID 1 from core 1: 7370.95 28.79 2170.64 754.35 5236.74 00:09:07.946 PCIE (0000:00:12.0) NSID 1 from core 1: 7371.29 28.79 2170.82 775.37 5645.49 00:09:07.946 PCIE (0000:00:12.0) NSID 2 from core 1: 7368.29 28.78 2172.01 780.33 5342.44 00:09:07.946 PCIE (0000:00:12.0) NSID 3 from core 1: 7370.95 28.79 2171.44 780.56 5535.62 00:09:07.946 ======================================================== 00:09:07.946 Total : 44221.06 172.74 2170.87 754.35 5645.49 00:09:07.946 00:09:07.946 Initializing NVMe Controllers 00:09:07.946 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:07.946 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:07.946 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:07.946 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:07.946 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:07.946 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:07.946 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:07.946 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:07.946 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:07.946 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:07.946 Initialization complete. Launching workers. 00:09:07.946 ======================================================== 00:09:07.946 Latency(us) 00:09:07.946 Device Information : IOPS MiB/s Average min max 00:09:07.946 PCIE (0000:00:10.0) NSID 1 from core 2: 2986.32 11.67 5356.48 1133.09 12777.57 00:09:07.946 PCIE (0000:00:11.0) NSID 1 from core 2: 2986.32 11.67 5357.66 1163.79 13107.35 00:09:07.946 PCIE (0000:00:13.0) NSID 1 from core 2: 2986.32 11.67 5357.72 1053.17 13275.81 00:09:07.946 PCIE (0000:00:12.0) NSID 1 from core 2: 2986.32 11.67 5357.78 1010.87 13034.93 00:09:07.946 PCIE (0000:00:12.0) NSID 2 from core 2: 2986.32 11.67 5357.76 1143.68 12556.01 00:09:07.946 PCIE (0000:00:12.0) NSID 3 from core 2: 2986.32 11.67 5357.80 1061.31 13172.84 00:09:07.946 ======================================================== 00:09:07.946 Total : 17917.93 69.99 5357.53 1010.87 13275.81 00:09:07.946 00:09:07.946 11:40:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75599 00:09:10.492 Initializing NVMe Controllers 00:09:10.492 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:10.492 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:10.492 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:10.492 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:10.492 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:10.492 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:10.492 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:10.492 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:10.492 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:10.492 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:10.492 Initialization complete. Launching workers. 
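A consistency check on the latency tables in this block: with 4 KiB reads, the MiB/s column is just IOPS * 4096 / 2^20. Against the core 1 row for 0000:00:10.0 above:

    awk 'BEGIN { printf "%.2f MiB/s\n", 7369.29 * 4096 / 1048576 }'   # prints 28.79, matching the table

The same identity holds for every row, including the totals (44221.06 IOPS works out to 172.74 MiB/s).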
00:09:10.492 ======================================================== 00:09:10.492 Latency(us) 00:09:10.492 Device Information : IOPS MiB/s Average min max 00:09:10.492 PCIE (0000:00:10.0) NSID 1 from core 0: 10419.12 40.70 1534.33 472.25 6069.76 00:09:10.492 PCIE (0000:00:11.0) NSID 1 from core 0: 10414.52 40.68 1535.87 261.12 5340.60 00:09:10.492 PCIE (0000:00:13.0) NSID 1 from core 0: 10401.32 40.63 1537.79 219.71 5666.77 00:09:10.492 PCIE (0000:00:12.0) NSID 1 from core 0: 10420.72 40.71 1534.90 496.83 5291.93 00:09:10.492 PCIE (0000:00:12.0) NSID 2 from core 0: 10415.32 40.68 1535.65 401.70 5430.26 00:09:10.492 PCIE (0000:00:12.0) NSID 3 from core 0: 10416.32 40.69 1535.47 469.00 5583.23 00:09:10.492 ======================================================== 00:09:10.492 Total : 62487.34 244.09 1535.67 219.71 6069.76 00:09:10.492 00:09:10.492 11:40:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75600 00:09:10.492 11:40:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75669 00:09:10.492 11:40:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:10.492 11:40:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75670 00:09:10.492 11:40:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:10.492 11:40:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:13.774 Initializing NVMe Controllers 00:09:13.774 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:13.774 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:13.774 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:13.774 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:13.774 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:13.774 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:13.774 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:13.774 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:13.774 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:13.774 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:13.774 Initialization complete. Launching workers. 
00:09:13.774 ======================================================== 00:09:13.774 Latency(us) 00:09:13.774 Device Information : IOPS MiB/s Average min max 00:09:13.774 PCIE (0000:00:10.0) NSID 1 from core 0: 4556.47 17.80 3509.83 743.00 13297.27 00:09:13.774 PCIE (0000:00:11.0) NSID 1 from core 0: 4556.47 17.80 3511.13 762.40 13835.24 00:09:13.774 PCIE (0000:00:13.0) NSID 1 from core 0: 4556.47 17.80 3511.74 760.07 11396.37 00:09:13.774 PCIE (0000:00:12.0) NSID 1 from core 0: 4556.47 17.80 3512.44 768.19 12547.07 00:09:13.774 PCIE (0000:00:12.0) NSID 2 from core 0: 4556.47 17.80 3512.82 770.67 12759.69 00:09:13.774 PCIE (0000:00:12.0) NSID 3 from core 0: 4556.47 17.80 3512.95 766.02 12866.72 00:09:13.774 ======================================================== 00:09:13.774 Total : 27338.84 106.79 3511.82 743.00 13835.24 00:09:13.774 00:09:13.774 Initializing NVMe Controllers 00:09:13.774 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:13.774 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:13.774 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:13.774 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:13.774 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:13.774 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:13.774 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:13.774 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:13.774 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:13.774 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:13.774 Initialization complete. Launching workers. 00:09:13.774 ======================================================== 00:09:13.774 Latency(us) 00:09:13.774 Device Information : IOPS MiB/s Average min max 00:09:13.774 PCIE (0000:00:10.0) NSID 1 from core 1: 4308.54 16.83 3711.75 1050.87 12373.19 00:09:13.774 PCIE (0000:00:11.0) NSID 1 from core 1: 4308.54 16.83 3713.21 1093.98 12582.94 00:09:13.774 PCIE (0000:00:13.0) NSID 1 from core 1: 4308.54 16.83 3713.10 1140.53 11964.88 00:09:13.774 PCIE (0000:00:12.0) NSID 1 from core 1: 4308.54 16.83 3713.00 1099.01 10787.41 00:09:13.774 PCIE (0000:00:12.0) NSID 2 from core 1: 4308.54 16.83 3712.84 1192.24 11130.00 00:09:13.774 PCIE (0000:00:12.0) NSID 3 from core 1: 4308.21 16.83 3712.98 1165.24 10714.76 00:09:13.774 ======================================================== 00:09:13.774 Total : 25850.93 100.98 3712.82 1050.87 12582.94 00:09:13.774 00:09:15.675 Initializing NVMe Controllers 00:09:15.675 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:15.675 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:15.675 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:15.675 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:15.675 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:15.675 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:15.675 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:15.675 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:15.675 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:15.675 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:15.675 Initialization complete. Launching workers. 
00:09:15.675 ======================================================== 00:09:15.675 Latency(us) 00:09:15.675 Device Information : IOPS MiB/s Average min max 00:09:15.675 PCIE (0000:00:10.0) NSID 1 from core 2: 2165.35 8.46 7386.95 904.45 26182.64 00:09:15.675 PCIE (0000:00:11.0) NSID 1 from core 2: 2165.35 8.46 7389.50 911.11 30615.98 00:09:15.675 PCIE (0000:00:13.0) NSID 1 from core 2: 2165.35 8.46 7389.36 895.64 31671.20 00:09:15.675 PCIE (0000:00:12.0) NSID 1 from core 2: 2165.35 8.46 7389.18 890.34 28943.55 00:09:15.675 PCIE (0000:00:12.0) NSID 2 from core 2: 2165.35 8.46 7389.03 902.72 27531.52 00:09:15.675 PCIE (0000:00:12.0) NSID 3 from core 2: 2165.35 8.46 7392.21 905.14 31027.72 00:09:15.675 ======================================================== 00:09:15.675 Total : 12992.08 50.75 7389.37 890.34 31671.20 00:09:15.675 00:09:15.675 ************************************ 00:09:15.675 END TEST nvme_multi_secondary 00:09:15.675 ************************************ 00:09:15.675 11:40:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75669 00:09:15.675 11:40:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75670 00:09:15.675 00:09:15.675 real 0m10.627s 00:09:15.675 user 0m18.207s 00:09:15.675 sys 0m0.525s 00:09:15.675 11:40:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.675 11:40:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:15.675 11:40:28 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:15.675 11:40:28 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:15.675 11:40:28 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/74642 ]] 00:09:15.675 11:40:28 nvme -- common/autotest_common.sh@1090 -- # kill 74642 00:09:15.675 11:40:28 nvme -- common/autotest_common.sh@1091 -- # wait 74642 00:09:15.675 [2024-11-19 11:40:28.724634] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.724737] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.724764] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.724789] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.726052] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.726154] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.726179] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.726207] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.727596] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 
00:09:15.675 [2024-11-19 11:40:28.727847] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.727877] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.727903] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.728777] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.728834] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.728854] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.728876] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75552) is not found. Dropping the request. 00:09:15.675 [2024-11-19 11:40:28.796785] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:09:15.675 11:40:28 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:15.675 11:40:28 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:15.675 11:40:28 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:15.675 11:40:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:15.675 11:40:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.675 11:40:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.675 ************************************ 00:09:15.675 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:15.675 ************************************ 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:15.675 * Looking for test storage... 
00:09:15.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.675 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:15.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.676 --rc genhtml_branch_coverage=1 00:09:15.676 --rc genhtml_function_coverage=1 00:09:15.676 --rc genhtml_legend=1 00:09:15.676 --rc geninfo_all_blocks=1 00:09:15.676 --rc geninfo_unexecuted_blocks=1 00:09:15.676 00:09:15.676 ' 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:15.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.676 --rc genhtml_branch_coverage=1 00:09:15.676 --rc genhtml_function_coverage=1 00:09:15.676 --rc genhtml_legend=1 00:09:15.676 --rc geninfo_all_blocks=1 00:09:15.676 --rc geninfo_unexecuted_blocks=1 00:09:15.676 00:09:15.676 ' 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:15.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.676 --rc genhtml_branch_coverage=1 00:09:15.676 --rc genhtml_function_coverage=1 00:09:15.676 --rc genhtml_legend=1 00:09:15.676 --rc geninfo_all_blocks=1 00:09:15.676 --rc geninfo_unexecuted_blocks=1 00:09:15.676 00:09:15.676 ' 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:15.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.676 --rc genhtml_branch_coverage=1 00:09:15.676 --rc genhtml_function_coverage=1 00:09:15.676 --rc genhtml_legend=1 00:09:15.676 --rc geninfo_all_blocks=1 00:09:15.676 --rc geninfo_unexecuted_blocks=1 00:09:15.676 00:09:15.676 ' 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:15.676 
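The scripts/common.sh trace above (deciding whether lcov 1.15 is older than 2) splits both version strings on '.', '-' and ':' and compares them component by component. A self-contained sketch of that idea, assuming purely numeric components; this is an illustration, not the repository's implementation:

    # true (exit 0) when dotted version $1 is strictly older than $2
    ver_lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # missing components count as 0, so 1.15 vs 2 decides on 1 < 2
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }

    ver_lt 1.15 2 && echo older   # prints: older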
11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:15.676 11:40:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75835 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75835 00:09:15.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 75835 ']' 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
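The waitforlisten step below blocks until the freshly launched target answers on /var/tmp/spdk.sock. A minimal stand-in for that pattern: the binary path and core mask come from this trace, while the polling loop and the use of the stock rpc_get_methods RPC are a sketch rather than the repository helper:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
    spdk_tgt_pid=$!
    # poll the default RPC socket until the target is ready to serve requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
        sleep 0.1
    done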
00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.676 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:15.941 [2024-11-19 11:40:29.097859] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:15.941 [2024-11-19 11:40:29.097971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75835 ] 00:09:15.941 [2024-11-19 11:40:29.236144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.941 [2024-11-19 11:40:29.270715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.941 [2024-11-19 11:40:29.271048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.941 [2024-11-19 11:40:29.271150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.941 [2024-11-19 11:40:29.271247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.875 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.875 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:16.875 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:16.875 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.875 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:16.875 nvme0n1 00:09:16.875 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.875 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:16.875 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_qiU9B.txt 00:09:16.875 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:16.875 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.875 11:40:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:16.875 true 00:09:16.875 11:40:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.875 11:40:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:16.875 11:40:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732016430 00:09:16.875 11:40:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75857 00:09:16.875 11:40:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:16.875 11:40:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:16.875 11:40:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h 
-c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:18.776 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:18.776 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.776 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:18.776 [2024-11-19 11:40:32.019238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:18.776 [2024-11-19 11:40:32.019498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:18.776 [2024-11-19 11:40:32.019520] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:18.776 [2024-11-19 11:40:32.019535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:18.776 [2024-11-19 11:40:32.022647] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:18.776 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75857 00:09:18.776 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.776 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75857 00:09:18.776 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75857 00:09:18.776 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:18.776 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_qiU9B.txt 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- 
# printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_qiU9B.txt 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75835 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 75835 ']' 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 75835 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75835 00:09:18.777 killing process with pid 75835 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75835' 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 75835 00:09:18.777 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 75835 00:09:19.036 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:19.036 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:19.036 00:09:19.036 real 0m3.584s 00:09:19.036 user 0m12.823s 00:09:19.036 sys 0m0.461s 00:09:19.036 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.036 
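Stripped of the xtrace noise, the sequence this test just completed is short; every RPC below appears verbatim in the trace, and only the backgrounding and wait glue is restated:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # hold the next admin opc 10 (Get Features) without submitting it, then
    # complete it with sct 0 / sc 1 (Invalid Opcode) once it gets aborted
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
    sleep 2   # give the held command time to look stuck
    "$rpc" bdev_nvme_reset_controller nvme0   # the reset must flush the stuck command
    wait

The base64 blob is the 64-byte admin command from the trace: its first byte is 0x0a (the Get Features opcode) and the completion above prints it as GET FEATURES NUMBER OF QUEUES with cdw10 0x7. The decoded status (sct 0x0, sc 0x1) is exactly the injected Invalid Opcode, and diff_time=2 stayed inside the 5 s test_timeout, so the test passed.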
************************************ 00:09:19.036 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:19.036 ************************************ 00:09:19.036 11:40:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:19.294 11:40:32 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:19.294 11:40:32 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:19.294 11:40:32 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:19.294 11:40:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.294 11:40:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:19.294 ************************************ 00:09:19.294 START TEST nvme_fio 00:09:19.294 ************************************ 00:09:19.294 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:19.294 11:40:32 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:19.294 11:40:32 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:19.294 11:40:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:19.294 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:19.294 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:19.294 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:19.294 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:19.294 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:19.294 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:19.294 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:19.294 11:40:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:19.294 11:40:32 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:19.294 11:40:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:19.294 11:40:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:19.294 11:40:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:19.555 11:40:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:19.555 11:40:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:19.555 11:40:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:19.555 11:40:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # 
local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:19.555 11:40:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:19.816 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:19.816 fio-3.35 00:09:19.816 Starting 1 thread 00:09:26.394 00:09:26.394 test: (groupid=0, jobs=1): err= 0: pid=75984: Tue Nov 19 11:40:39 2024 00:09:26.394 read: IOPS=21.2k, BW=82.7MiB/s (86.7MB/s)(165MiB/2001msec) 00:09:26.394 slat (nsec): min=4205, max=73824, avg=5235.39, stdev=2362.75 00:09:26.394 clat (usec): min=366, max=9528, avg=3014.95, stdev=929.53 00:09:26.394 lat (usec): min=374, max=9564, avg=3020.19, stdev=930.87 00:09:26.394 clat percentiles (usec): 00:09:26.394 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2507], 00:09:26.394 | 30.00th=[ 2573], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:09:26.394 | 70.00th=[ 2900], 80.00th=[ 3130], 90.00th=[ 4228], 95.00th=[ 5276], 00:09:26.394 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 7570], 99.95th=[ 7767], 00:09:26.394 | 99.99th=[ 9110] 00:09:26.394 bw ( KiB/s): min=75816, max=90784, per=100.00%, avg=84773.33, stdev=7907.11, samples=3 00:09:26.394 iops : min=18954, max=22696, avg=21194.00, stdev=1977.15, samples=3 00:09:26.394 write: IOPS=21.0k, BW=82.2MiB/s (86.1MB/s)(164MiB/2001msec); 0 zone resets 00:09:26.394 slat (nsec): min=4263, max=74752, avg=5422.13, stdev=2412.38 00:09:26.394 clat (usec): min=435, max=9191, avg=3032.66, stdev=928.03 00:09:26.394 lat (usec): min=443, max=9203, avg=3038.08, stdev=929.37 00:09:26.394 clat percentiles (usec): 00:09:26.394 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2507], 00:09:26.394 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2802], 00:09:26.394 | 70.00th=[ 2933], 80.00th=[ 3163], 90.00th=[ 4228], 95.00th=[ 5276], 00:09:26.394 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7635], 99.95th=[ 7767], 00:09:26.394 | 99.99th=[ 8848] 00:09:26.394 bw ( KiB/s): min=75720, max=90928, per=100.00%, avg=84882.67, stdev=8069.02, samples=3 00:09:26.394 iops : min=18930, max=22732, avg=21220.67, stdev=2017.26, samples=3 00:09:26.394 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:26.394 lat (msec) : 2=0.72%, 4=87.93%, 10=11.32% 00:09:26.394 cpu : usr=99.10%, sys=0.10%, ctx=2, majf=0, minf=625 00:09:26.394 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:26.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.394 issued rwts: total=42357,42085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.395 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.395 00:09:26.395 Run status group 0 (all jobs): 00:09:26.395 READ: bw=82.7MiB/s (86.7MB/s), 82.7MiB/s-82.7MiB/s (86.7MB/s-86.7MB/s), io=165MiB (173MB), run=2001-2001msec 00:09:26.395 WRITE: bw=82.2MiB/s (86.1MB/s), 82.2MiB/s-82.2MiB/s (86.1MB/s-86.1MB/s), io=164MiB (172MB), run=2001-2001msec 00:09:26.395 ----------------------------------------------------- 00:09:26.395 Suppressions used: 00:09:26.395 count bytes template 00:09:26.395 1 32 /usr/src/fio/parse.c 00:09:26.395 1 8 libtcmalloc_minimal.so 00:09:26.395 ----------------------------------------------------- 00:09:26.395 00:09:26.395 11:40:39 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:26.395 11:40:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:26.395 11:40:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:26.395 11:40:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:26.395 11:40:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:26.395 11:40:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:26.655 11:40:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:26.655 11:40:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:26.655 11:40:39 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:26.914 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:26.914 fio-3.35 00:09:26.914 Starting 1 thread 00:09:33.477 00:09:33.477 test: (groupid=0, jobs=1): err= 0: pid=76035: Tue Nov 19 11:40:45 2024 00:09:33.477 read: IOPS=19.6k, BW=76.6MiB/s (80.4MB/s)(153MiB/2001msec) 00:09:33.477 slat (nsec): min=4227, max=81998, avg=5295.50, stdev=2545.36 00:09:33.477 clat (usec): min=246, max=12559, avg=3245.47, stdev=964.05 00:09:33.477 lat (usec): min=250, max=12617, avg=3250.76, stdev=965.12 00:09:33.477 clat percentiles (usec): 00:09:33.477 | 1.00th=[ 1909], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2606], 00:09:33.477 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 3064], 00:09:33.477 | 70.00th=[ 3294], 80.00th=[ 3818], 90.00th=[ 4686], 95.00th=[ 5276], 00:09:33.477 | 99.00th=[ 6325], 99.50th=[ 6783], 99.90th=[ 7832], 99.95th=[ 8979], 00:09:33.477 | 99.99th=[12125] 00:09:33.477 bw ( KiB/s): min=70528, max=82056, per=98.87%, avg=77600.00, stdev=6193.25, samples=3 00:09:33.477 iops : min=17632, max=20514, avg=19400.00, stdev=1548.31, samples=3 00:09:33.477 write: IOPS=19.6k, BW=76.5MiB/s (80.2MB/s)(153MiB/2001msec); 0 zone resets 00:09:33.477 slat (nsec): min=4276, max=57705, avg=5519.79, stdev=2591.92 00:09:33.477 clat (usec): min=237, max=12108, avg=3263.46, stdev=970.22 00:09:33.477 lat (usec): min=242, max=12120, avg=3268.98, stdev=971.29 00:09:33.477 clat percentiles (usec): 00:09:33.477 | 1.00th=[ 1926], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2606], 00:09:33.477 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 3064], 00:09:33.477 | 70.00th=[ 3326], 80.00th=[ 3851], 90.00th=[ 4686], 95.00th=[ 5276], 00:09:33.477 | 99.00th=[ 6390], 99.50th=[ 7046], 99.90th=[ 7963], 99.95th=[10159], 00:09:33.477 | 99.99th=[11994] 00:09:33.477 bw ( KiB/s): min=70544, max=82168, per=99.33%, avg=77800.00, stdev=6327.30, samples=3 00:09:33.477 iops : min=17636, max=20542, avg=19450.00, stdev=1581.83, samples=3 00:09:33.477 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.01% 00:09:33.477 lat (msec) : 2=1.17%, 4=81.11%, 10=17.62%, 20=0.05% 00:09:33.477 cpu : usr=98.90%, sys=0.15%, ctx=3, majf=0, minf=625 00:09:33.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:33.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.477 issued rwts: total=39263,39183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.477 00:09:33.477 Run status group 0 (all jobs): 00:09:33.477 READ: bw=76.6MiB/s (80.4MB/s), 76.6MiB/s-76.6MiB/s (80.4MB/s-80.4MB/s), io=153MiB (161MB), run=2001-2001msec 00:09:33.477 WRITE: bw=76.5MiB/s (80.2MB/s), 76.5MiB/s-76.5MiB/s (80.2MB/s-80.2MB/s), io=153MiB (160MB), run=2001-2001msec 00:09:33.477 ----------------------------------------------------- 00:09:33.477 Suppressions used: 00:09:33.477 count bytes template 00:09:33.477 1 32 /usr/src/fio/parse.c 00:09:33.477 1 8 libtcmalloc_minimal.so 00:09:33.477 ----------------------------------------------------- 00:09:33.477 00:09:33.477 11:40:46 nvme.nvme_fio 
-- nvme/nvme.sh@44 -- # ran_fio=true 00:09:33.477 11:40:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:33.477 11:40:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:33.477 11:40:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:33.477 11:40:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:33.478 11:40:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:33.478 11:40:46 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:33.478 11:40:46 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:33.478 11:40:46 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:33.478 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:33.478 fio-3.35 00:09:33.478 Starting 1 thread 00:09:40.051 00:09:40.051 test: (groupid=0, jobs=1): err= 0: pid=76091: Tue Nov 19 11:40:52 2024 00:09:40.051 read: IOPS=20.1k, BW=78.5MiB/s (82.4MB/s)(157MiB/2001msec) 00:09:40.051 slat (nsec): min=4222, max=71094, avg=5325.31, stdev=2548.31 00:09:40.051 clat (usec): min=222, max=11072, avg=3167.62, stdev=943.33 00:09:40.051 lat (usec): min=227, max=11119, avg=3172.94, stdev=944.53 00:09:40.051 clat percentiles (usec): 00:09:40.051 | 1.00th=[ 2212], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2573], 00:09:40.051 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 
2835], 60.00th=[ 2933], 00:09:40.051 | 70.00th=[ 3130], 80.00th=[ 3556], 90.00th=[ 4621], 95.00th=[ 5276], 00:09:40.051 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 7963], 99.95th=[ 8455], 00:09:40.051 | 99.99th=[10945] 00:09:40.051 bw ( KiB/s): min=80896, max=82856, per=100.00%, avg=81568.00, stdev=1115.79, samples=3 00:09:40.051 iops : min=20224, max=20714, avg=20392.00, stdev=278.95, samples=3 00:09:40.051 write: IOPS=20.1k, BW=78.3MiB/s (82.1MB/s)(157MiB/2001msec); 0 zone resets 00:09:40.051 slat (nsec): min=4293, max=82893, avg=5469.76, stdev=2552.03 00:09:40.051 clat (usec): min=231, max=10949, avg=3185.14, stdev=944.13 00:09:40.052 lat (usec): min=235, max=10964, avg=3190.61, stdev=945.36 00:09:40.052 clat percentiles (usec): 00:09:40.052 | 1.00th=[ 2212], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2573], 00:09:40.052 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2835], 60.00th=[ 2966], 00:09:40.052 | 70.00th=[ 3130], 80.00th=[ 3556], 90.00th=[ 4621], 95.00th=[ 5342], 00:09:40.052 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 8029], 99.95th=[ 9110], 00:09:40.052 | 99.99th=[10683] 00:09:40.052 bw ( KiB/s): min=80824, max=83008, per=100.00%, avg=81706.67, stdev=1150.62, samples=3 00:09:40.052 iops : min=20206, max=20752, avg=20426.67, stdev=287.65, samples=3 00:09:40.052 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:09:40.052 lat (msec) : 2=0.44%, 4=84.49%, 10=14.99%, 20=0.03% 00:09:40.052 cpu : usr=98.80%, sys=0.35%, ctx=31, majf=0, minf=625 00:09:40.052 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:40.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:40.052 issued rwts: total=40238,40125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.052 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:40.052 00:09:40.052 Run status group 0 (all jobs): 00:09:40.052 READ: bw=78.5MiB/s (82.4MB/s), 78.5MiB/s-78.5MiB/s (82.4MB/s-82.4MB/s), io=157MiB (165MB), run=2001-2001msec 00:09:40.052 WRITE: bw=78.3MiB/s (82.1MB/s), 78.3MiB/s-78.3MiB/s (82.1MB/s-82.1MB/s), io=157MiB (164MB), run=2001-2001msec 00:09:40.052 ----------------------------------------------------- 00:09:40.052 Suppressions used: 00:09:40.052 count bytes template 00:09:40.052 1 32 /usr/src/fio/parse.c 00:09:40.052 1 8 libtcmalloc_minimal.so 00:09:40.052 ----------------------------------------------------- 00:09:40.052 00:09:40.052 11:40:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:40.052 11:40:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:40.052 11:40:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:40.052 11:40:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:40.052 11:40:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:40.052 11:40:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:40.052 11:40:53 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:40.052 11:40:53 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:40.052 11:40:53 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:40.312 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:40.312 fio-3.35 00:09:40.312 Starting 1 thread 00:09:45.603 00:09:45.603 test: (groupid=0, jobs=1): err= 0: pid=76146: Tue Nov 19 11:40:58 2024 00:09:45.603 read: IOPS=19.6k, BW=76.5MiB/s (80.2MB/s)(153MiB/2001msec) 00:09:45.603 slat (nsec): min=4263, max=76742, avg=5475.64, stdev=2690.56 00:09:45.603 clat (usec): min=332, max=11747, avg=3255.40, stdev=1029.23 00:09:45.603 lat (usec): min=337, max=11804, avg=3260.88, stdev=1030.41 00:09:45.603 clat percentiles (usec): 00:09:45.603 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2573], 00:09:45.603 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2966], 00:09:45.603 | 70.00th=[ 3326], 80.00th=[ 3982], 90.00th=[ 4883], 95.00th=[ 5473], 00:09:45.603 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7963], 99.95th=[10421], 00:09:45.603 | 99.99th=[11600] 00:09:45.603 bw ( KiB/s): min=73264, max=84598, per=99.93%, avg=78247.33, stdev=5789.39, samples=3 00:09:45.603 iops : min=18316, max=21149, avg=19561.67, stdev=1447.07, samples=3 00:09:45.603 write: IOPS=19.5k, BW=76.3MiB/s (80.1MB/s)(153MiB/2001msec); 0 zone resets 00:09:45.603 slat (nsec): min=4324, max=88672, avg=5581.47, stdev=2658.54 00:09:45.603 clat (usec): min=205, max=11684, avg=3268.16, stdev=1020.53 00:09:45.603 lat (usec): min=210, max=11695, avg=3273.74, stdev=1021.67 00:09:45.603 clat percentiles (usec): 00:09:45.603 | 1.00th=[ 2114], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2573], 00:09:45.603 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2999], 00:09:45.603 | 70.00th=[ 3359], 80.00th=[ 3982], 90.00th=[ 4883], 95.00th=[ 5407], 
00:09:45.603 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 8848], 99.95th=[10421], 00:09:45.603 | 99.99th=[11600] 00:09:45.603 bw ( KiB/s): min=73288, max=84774, per=100.00%, avg=78343.33, stdev=5865.21, samples=3 00:09:45.603 iops : min=18322, max=21193, avg=19585.67, stdev=1466.03, samples=3 00:09:45.603 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.01% 00:09:45.603 lat (msec) : 2=0.64%, 4=79.55%, 10=19.69%, 20=0.07% 00:09:45.603 cpu : usr=99.00%, sys=0.05%, ctx=3, majf=0, minf=625 00:09:45.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:45.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.604 issued rwts: total=39170,39107,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.604 00:09:45.604 Run status group 0 (all jobs): 00:09:45.604 READ: bw=76.5MiB/s (80.2MB/s), 76.5MiB/s-76.5MiB/s (80.2MB/s-80.2MB/s), io=153MiB (160MB), run=2001-2001msec 00:09:45.604 WRITE: bw=76.3MiB/s (80.1MB/s), 76.3MiB/s-76.3MiB/s (80.1MB/s-80.1MB/s), io=153MiB (160MB), run=2001-2001msec 00:09:45.604 ----------------------------------------------------- 00:09:45.604 Suppressions used: 00:09:45.604 count bytes template 00:09:45.604 1 32 /usr/src/fio/parse.c 00:09:45.604 1 8 libtcmalloc_minimal.so 00:09:45.604 ----------------------------------------------------- 00:09:45.604 00:09:45.868 ************************************ 00:09:45.868 END TEST nvme_fio 00:09:45.868 ************************************ 00:09:45.868 11:40:59 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:45.868 11:40:59 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:45.868 00:09:45.868 real 0m26.542s 00:09:45.868 user 0m19.530s 00:09:45.868 sys 0m10.628s 00:09:45.868 11:40:59 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.869 11:40:59 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:45.869 ************************************ 00:09:45.869 END TEST nvme 00:09:45.869 ************************************ 00:09:45.869 00:09:45.869 real 1m33.366s 00:09:45.869 user 3m33.218s 00:09:45.869 sys 0m20.319s 00:09:45.869 11:40:59 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.869 11:40:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.869 11:40:59 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:45.869 11:40:59 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:45.869 11:40:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:45.869 11:40:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.869 11:40:59 -- common/autotest_common.sh@10 -- # set +x 00:09:45.869 ************************************ 00:09:45.869 START TEST nvme_scc 00:09:45.869 ************************************ 00:09:45.869 11:40:59 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:45.869 * Looking for test storage... 
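With nvme_fio done for all four controllers, note the shape of each pass above: get_nvme_bdfs builds the BDF list from gen_nvme.sh piped through jq -r '.config[].params.traddr', spdk_nvme_identify confirms the namespace exists and whether it uses extended-LBA metadata (which decides --bs), and fio is run against the SPDK NVMe external ioengine by LD_PRELOADing the plugin, with libasan forced ahead of it whenever the build is sanitizer-instrumented. A condensed sketch of one pass, where the paths and flags are verbatim from the log and only the variable names are mine:

    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    JOB=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    # The ASAN runtime, if the plugin links it, must come first in LD_PRELOAD;
    # this is what the ldd | grep libasan | awk '{print $3}' dance resolves.
    asan_lib=$(ldd "$PLUGIN" | awk '/libasan/ {print $3; exit}')

    LD_PRELOAD="${asan_lib:+$asan_lib }$PLUGIN" /usr/src/fio/fio "$JOB" \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

Note the dots in traddr=0000.00.10.0: fio treats ':' as a filename separator, so the SPDK plugin takes the PCI address with its colons replaced by periods.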
00:09:45.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.869 11:40:59 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:45.869 11:40:59 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:45.869 11:40:59 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:45.869 11:40:59 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.869 11:40:59 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:45.869 11:40:59 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.869 11:40:59 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:45.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.869 --rc genhtml_branch_coverage=1 00:09:45.869 --rc genhtml_function_coverage=1 00:09:45.869 --rc genhtml_legend=1 00:09:45.869 --rc geninfo_all_blocks=1 00:09:45.869 --rc geninfo_unexecuted_blocks=1 00:09:45.870 00:09:45.870 ' 00:09:45.870 11:40:59 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:45.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.870 --rc genhtml_branch_coverage=1 00:09:45.870 --rc genhtml_function_coverage=1 00:09:45.870 --rc genhtml_legend=1 00:09:45.870 --rc geninfo_all_blocks=1 00:09:45.870 --rc geninfo_unexecuted_blocks=1 00:09:45.870 00:09:45.870 ' 00:09:45.870 11:40:59 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:45.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.870 --rc genhtml_branch_coverage=1 00:09:45.870 --rc genhtml_function_coverage=1 00:09:45.870 --rc genhtml_legend=1 00:09:45.870 --rc geninfo_all_blocks=1 00:09:45.870 --rc geninfo_unexecuted_blocks=1 00:09:45.870 00:09:45.870 ' 00:09:45.870 11:40:59 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:45.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.870 --rc genhtml_branch_coverage=1 00:09:45.870 --rc genhtml_function_coverage=1 00:09:45.870 --rc genhtml_legend=1 00:09:45.870 --rc geninfo_all_blocks=1 00:09:45.870 --rc geninfo_unexecuted_blocks=1 00:09:45.870 00:09:45.870 ' 00:09:45.870 11:40:59 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:45.870 11:40:59 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:45.870 11:40:59 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:46.133 11:40:59 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:46.133 11:40:59 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.133 11:40:59 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.133 11:40:59 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.133 11:40:59 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.133 11:40:59 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.133 11:40:59 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.133 11:40:59 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.133 11:40:59 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.133 11:40:59 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:46.133 11:40:59 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
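The block above is autotest_common.sh priming coverage options: it checks the installed lcov with lt 1.15 2 before choosing between the legacy --rc lcov_* flags and the newer spelling, where lt/cmp_versions split each version string on '.', '-' and ':' and compare numerically field by field, with missing fields counting as 0. A condensed sketch with the same semantics as the traced scripts/common.sh helpers, not their verbatim bodies:

    cmp_versions() {   # cmp_versions <ver1> <op> <ver2>, op one of < > <= >= ==
        local -a ver1 ver2
        local op=$2 v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == *'>'* ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == *'<'* ]]; return
            fi
        done
        [[ $op == *'='* ]]   # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo 'lcov predates 2.x: use the --rc lcov_*_coverage options'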
00:09:46.133 11:40:59 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:46.133 11:40:59 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:46.133 11:40:59 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:46.133 11:40:59 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:46.133 11:40:59 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:46.133 11:40:59 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:46.133 11:40:59 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:46.133 11:40:59 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:46.133 11:40:59 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:46.133 11:40:59 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.133 11:40:59 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:46.133 11:40:59 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:46.133 11:40:59 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:46.133 11:40:59 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:46.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:46.394 Waiting for block devices as requested 00:09:46.394 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.655 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.655 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.655 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.982 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:51.982 11:41:05 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:51.982 11:41:05 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:51.982 11:41:05 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:51.982 11:41:05 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.982 11:41:05 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
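Everything from here to the end of the section is nvme_get walking the output of /usr/local/src/nvme-cli/nvme id-ctrl for nvme0 (the QEMU controller at 0000:00:11.0): each "field : value" line is folded into a bash associative array via eval, which is why the trace repeats one IFS=: / read / eval triplet per identify field. A stripped-down sketch of that loop; the guard and whitespace handling here are illustrative, and the real helper in test/common/nvme/functions.sh also fills per-namespace arrays:

    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue        # skip banners and blank lines
        reg=${reg//[[:space:]]/}                    # e.g. 'vid', 'ssvid', 'sn'
        val=${val# }                                # drop the separator space only
        [[ $reg =~ ^[a-zA-Z0-9]+$ ]] || continue
        eval "nvme0[$reg]=\"\$val\""                # e.g. nvme0[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    printf '%s mdts=%s nn=%s\n' "${nvme0[subnqn]}" "${nvme0[mdts]}" "${nvme0[nn]}"

Later checks in nvme_scc (for instance, the copy-support bit in oncs) read this array back instead of re-running identify against the device.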
00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.982 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:51.983 11:41:05 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.983 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:51.984 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:51.985 11:41:05 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.985 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
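
[Annotation] The trace above is nvme/functions.sh@16-23 iterating over the output of /usr/local/src/nvme-cli/nvme id-ctrl / id-ns: each "reg : val" line is split with IFS=:, skipped when the value is empty (the [[ -n ... ]] guard at @22), and eval'd into a bash associative array (nvme0, nvme0n1, ...). A minimal standalone sketch of that idiom — the exact trimming done by the real helper is not visible in this trace, so details are hedged:

    #!/usr/bin/env bash
    # Sketch of the nvme_get parsing pattern shown in the trace; the nvme-cli
    # path /usr/local/src/nvme-cli/nvme is taken from this log.
    declare -A ns
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # field name, e.g. nsze, flbas, lbaf0
        val=${val# }                           # drop the space after ':'; rest kept verbatim
        [[ -n $reg && -n $val ]] || continue   # mirrors the [[ -n ... ]] guard at @22
        ns[$reg]=$val                          # mirrors the eval at @23
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1)
    echo "nsze=${ns[nsze]} flbas=${ns[flbas]}"

Note that flbas=0x4 selects LBA format 4 (listed a little further down as ms:0 lbads:12, in use), so nsze=0x140000 works out to 1310720 * 4096 B = 5 GiB for nvme0n1.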
00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
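
[Annotation] The mssrl=128, mcl=128 and msrc=127 entries just above are the Identify Namespace copy limits, which is what an nvme_scc (simple copy) run ultimately cares about. Read against the NVMe base spec (my gloss, not stated in the log): MSSRL bounds logical blocks per source range, MCL bounds total logical blocks per Copy command, and MSRC is a 0's-based count of source ranges, so 127 means 128 ranges. A small follow-on using the ns array from the sketch above:

    # Hypothetical use of the ns[] array built earlier; field semantics hedged above.
    max_ranges=$(( ${ns[msrc]:-0} + 1 ))    # MSRC is 0's based: 127 -> 128 ranges
    echo "copy limits: ${ns[mssrl]} blocks/range, ${ns[mcl]} blocks/command, ${max_ranges} ranges"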
00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:51.986 11:41:05 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:51.986 11:41:05 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:51.986 11:41:05 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:51.986 11:41:05 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:51.987 11:41:05 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.987 11:41:05 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:51.987 11:41:05 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:51.987 
11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.987 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
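
[Annotation] A few entries back (functions.sh@47-52, scripts/common.sh@18-27) the outer discovery loop re-entered for a second controller: nvme1 at PCI 0000:00:10.0 passed pci_can_use() — the allow-list is empty, so the [[ -z '' ]] branch returns 0 — and the same nvme_get harvest now repeats for nvme1. A rough sketch of that scan, with pci_can_use stubbed and the sysfs-to-PCI lookup via readlink being my assumption, since the real resolution is not visible in the trace:

    # Sketch of the /sys/class/nvme enumeration driven by nvme/functions.sh@47-52.
    pci_can_use() { true; }    # stand-in; the real check honours PCI allow/block lists
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0 (assumed lookup)
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
        echo "collect id-ctrl/id-ns for /dev/$ctrl_dev at $pci"
    done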
00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:51.988 11:41:05 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:51.988 11:41:05 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:51.988 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:51.989 11:41:05 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.989 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
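
[Annotation] Both QEMU controllers report oncs=0x15d. ONCS bit 8 advertises the Copy command, the capability an nvme_scc run is gating on; on my reading of the spec the remaining set bits (0, 2, 3, 4, 6) are Compare, Dataset Management, Write Zeroes, the Save/Select field in Set Features, and Timestamp. A throwaway decode under those assumed bit names:

    # Hedged ONCS decode; bit names assumed from the NVMe base spec (bit 8 = Copy).
    oncs=0x15d
    names=(Compare WriteUncorrectable DatasetMgmt WriteZeroes SaveSelect Reservations Timestamp Verify Copy)
    for i in "${!names[@]}"; do
        (( oncs >> i & 1 )) && echo "ONCS bit $i: ${names[$i]}"
    done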
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@17-20 -- # local ref=nvme1n1 reg val; shift; local -gA 'nvme1n1=()'
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
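The @21-@23 entries that dominate this trace are all one loop: nvme_get reads the nvme-cli output line by line, splits each line on the first ':' into a register name and a value, and assigns the pair into a bash associative array named after the device. A condensed paraphrase of that loop, simplified from what the trace shows (the real function builds the array name dynamically and assigns through eval), runnable wherever the same nvme-cli binary is installed:

declare -A ctrl
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue               # skip blanks and headers, as the @22 checks do
    reg=${reg//[[:space:]]/}                # normalize the register name
    val=${val#"${val%%[![:space:]]*}"}      # trim leading whitespace from the value
    ctrl[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)
echo "model: ${ctrl[mn]}, serial: ${ctrl[sn]}"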
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a nvme1n1[ncap]=0x17a17a nvme1n1[nuse]=0x17a17a
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 nvme1n1[nlbaf]=7 nvme1n1[flbas]=0x7 nvme1n1[mc]=0x3 nvme1n1[dpc]=0x1f nvme1n1[dps]=0
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 nvme1n1[rescap]=0 nvme1n1[fpi]=0 nvme1n1[dlfeat]=1 nvme1n1[nawun]=0 nvme1n1[nawupf]=0 nvme1n1[nacwu]=0
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 nvme1n1[nabo]=0 nvme1n1[nabspf]=0 nvme1n1[noiob]=0 nvme1n1[nvmcap]=0
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 nvme1n1[npwa]=0 nvme1n1[npdg]=0 nvme1n1[npda]=0 nvme1n1[nows]=0
00:09:51.990 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 nvme1n1[mcl]=128 nvme1n1[msrc]=127 nvme1n1[nulbaf]=0
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 nvme1n1[nsattr]=0 nvme1n1[nvmsetid]=0 nvme1n1[endgid]=0
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 nvme1n1[eui64]=0000000000000000
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0' nvme1n1[lbaf1]='ms:8 lbads:9 rp:0' nvme1n1[lbaf2]='ms:16 lbads:9 rp:0' nvme1n1[lbaf3]='ms:64 lbads:9 rp:0'
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0' nvme1n1[lbaf5]='ms:8 lbads:12 rp:0' nvme1n1[lbaf6]='ms:16 lbads:12 rp:0' nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:51.991 11:41:05 nvme_scc -- scripts/common.sh@18-27 -- # local i; [[ =~ 0000:00:12.0 ]]; [[ -z '' ]]; return 0
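With the nvme1n1 dump complete, its geometry follows from two of the stored fields: the low nibble of flbas indexes the lbaf table, and the selected entry's lbads is log2 of the data block size. A minimal sketch using the values logged above (plain bash arithmetic, no helpers assumed):

flbas=0x7 nsze=0x17a17a
lbads=12                      # from lbaf7: 'ms:64 lbads:12 rp:0 (in use)'
fmt=$(( flbas & 0xf ))        # active LBA format index
block=$(( 1 << lbads ))       # data block size in bytes
printf 'lbaf%d: %d-byte blocks, %d blocks, %d bytes\n' \
    "$fmt" "$block" "$(( nsze ))" "$(( nsze * block ))"

For this namespace that works out to lbaf7, 4096-byte blocks, and roughly 6.3 GB of formatted capacity.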
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@17-20 -- # local ref=nvme2 reg val; shift; local -gA 'nvme2=()'
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:51.991 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 nvme2[ssvid]=0x1af4 nvme2[sn]='12342 ' nvme2[mn]='QEMU NVMe Ctrl ' nvme2[fr]='8.0.0 '
00:09:51.992 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 nvme2[ieee]=525400 nvme2[cmic]=0 nvme2[mdts]=7 nvme2[cntlid]=0 nvme2[ver]=0x10400
00:09:51.992 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 nvme2[rtd3e]=0 nvme2[oaes]=0x100 nvme2[ctratt]=0x8000 nvme2[rrls]=0 nvme2[cntrltype]=1
00:09:51.992 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 nvme2[crdt1]=0 nvme2[crdt2]=0 nvme2[crdt3]=0
00:09:51.992 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 nvme2[vwci]=0 nvme2[mec]=0 nvme2[oacs]=0x12a nvme2[acl]=3 nvme2[aerl]=3
00:09:51.992 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 nvme2[lpa]=0x7 nvme2[elpe]=0 nvme2[npss]=0 nvme2[avscc]=0 nvme2[apsta]=0
00:09:51.992 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 nvme2[cctemp]=373
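Two of the nvme2 fields just recorded are in encoded units: mdts caps the data transfer size as a power-of-two multiple of the controller's minimum memory page size, and wctemp/cctemp are absolute thresholds in kelvins. A minimal sketch, assuming the common 4 KiB minimum page (MPSMIN=0 is an assumption here; the CAP register is not shown in this log):

mdts=7 wctemp=343 cctemp=373
mps_min=4096                                       # assumed minimum memory page size
echo "max transfer : $(( (1 << mdts) * mps_min / 1024 )) KiB"
echo "warning temp : $(( wctemp - 273 )) C (approx, integer math)"
echo "critical temp: $(( cctemp - 273 )) C (approx, integer math)"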
00:09:51.993 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 nvme2[hmpre]=0 nvme2[hmmin]=0 nvme2[tnvmcap]=0 nvme2[unvmcap]=0 nvme2[rpmbs]=0
00:09:51.993 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 nvme2[dsto]=0 nvme2[fwug]=0 nvme2[kas]=0 nvme2[hctma]=0 nvme2[mntmt]=0 nvme2[mxtmt]=0
00:09:51.993 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 nvme2[hmminds]=0 nvme2[hmmaxd]=0 nvme2[nsetidmax]=0 nvme2[endgidmax]=0
00:09:51.993 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 nvme2[anacap]=0 nvme2[anagrpmax]=0 nvme2[nanagrpid]=0 nvme2[pels]=0 nvme2[domainid]=0 nvme2[megcap]=0
00:09:51.993 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 nvme2[cqes]=0x44 nvme2[maxcmd]=0 nvme2[nn]=256 nvme2[oncs]=0x15d
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 nvme2[fna]=0 nvme2[vwc]=0x7 nvme2[awun]=0 nvme2[awupf]=0 nvme2[icsvscc]=0 nvme2[nwpc]=0 nvme2[acwu]=0
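sqes, cqes, and oncs above are packed fields as well: each nibble of SQES/CQES is a log2 entry size (required size in the low nibble, maximum in the high one), and ONCS is a bitmask of optional NVM commands. A minimal decode sketch; the bit names follow the NVMe base specification and should be treated as an assumption if your revision differs:

sqes=0x66 cqes=0x44 oncs=0x15d
echo "SQE: required $(( 1 << (sqes & 0xf) ))B, max $(( 1 << (sqes >> 4) ))B"
echo "CQE: required $(( 1 << (cqes & 0xf) ))B, max $(( 1 << (cqes >> 4) ))B"
names=(compare write-uncorrectable dsm write-zeroes feature-save
       reservations timestamp verify copy)
for bit in "${!names[@]}"; do
    (( oncs & (1 << bit) )) && echo "ONCS bit $bit: ${names[$bit]}"
done

For this QEMU controller that reads as 64-byte SQEs, 16-byte CQEs, and support for Compare, DSM, Write Zeroes, saveable features, Timestamp, and Copy.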
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 nvme2[sgls]=0x1 nvme2[mnan]=0 nvme2[maxdna]=0 nvme2[maxcna]=0
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 nvme2[iorcsz]=0 nvme2[icdoff]=0 nvme2[fcatt]=0 nvme2[msdbd]=0 nvme2[ofcs]=0
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' nvme2[active_power_workload]=-
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@17-20 -- # local ref=nvme2n1 reg val; shift; local -gA 'nvme2n1=()'
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 nvme2n1[ncap]=0x100000 nvme2n1[nuse]=0x100000
00:09:51.994 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 nvme2n1[nlbaf]=7 nvme2n1[flbas]=0x4 nvme2n1[mc]=0x3 nvme2n1[dpc]=0x1f nvme2n1[dps]=0
00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 nvme2n1[rescap]=0 nvme2n1[fpi]=0 nvme2n1[dlfeat]=1
00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 nvme2n1[nawupf]=0 nvme2n1[nacwu]=0 nvme2n1[nabsn]=0 nvme2n1[nabo]=0 nvme2n1[nabspf]=0
00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 nvme2n1[nvmcap]=0 nvme2n1[npwg]=0 nvme2n1[npwa]=0 nvme2n1[npdg]=0 nvme2n1[npda]=0 nvme2n1[nows]=0
00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 nvme2n1[mcl]=128
00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:51.995 11:41:05 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:51.995 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:51.996 11:41:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:52.272 11:41:05 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:52.272 11:41:05 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:52.272 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
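The for ns in "$ctrl/${ctrl##*/}n"* loop echoed at functions.sh@54-58 (for nvme2n1 above, and here for nvme2n2) sweeps the controller's namespace nodes under sysfs and parses each one. A condensed sketch reconstructed from those trace lines, reusing the sketch_nvme_get helper from earlier:

    # Namespace sweep per functions.sh@53-58 as echoed in the trace;
    # reconstructed for illustration, not copied from the script.
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns             # nameref, as at functions.sh@53
    for ns in "$ctrl/${ctrl##*/}n"*; do      # globs /sys/class/nvme/nvme2/nvme2n*
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                     # nvme2n1, nvme2n2, nvme2n3 in turn
        sketch_nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev          # keyed by namespace number (1, 2, 3)
    done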
00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
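Each lbafN value parsed below (and already recorded for nvme2n1 above) packs three fields: ms is the per-block metadata size in bytes, lbads is the log2 of the data size, and rp is a relative performance hint. The low nibble of flbas selects the active format, so flbas=0x4 together with lbaf4="ms:0 lbads:12 rp:0 (in use)" means these namespaces use 4096-byte logical blocks with no separate metadata. A hypothetical helper (not part of functions.sh) that derives this from the arrays built here:

    # Hypothetical helper: compute the logical block size of the in-use
    # LBA format from an array populated by the parsing above.
    in_use_block_size() {
        local -n ns=$1                       # e.g. nvme2n1 / nvme2n2
        local fmt=$(( ns[flbas] & 0xf ))     # FLBAS bits 3:0 pick the format
        local lbaf=${ns[lbaf$fmt]}           # "ms:0 lbads:12 rp:0 (in use)"
        local lbads=${lbaf#*lbads:}          # "12 rp:0 (in use)"
        lbads=${lbads%% *}                   # "12"
        echo $(( 1 << lbads ))               # 2^12 = 4096 bytes
    }

    # in_use_block_size nvme2n2   -> 4096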
00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:52.273 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 
11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 
11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:52.274 11:41:05 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.274 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:52.275 
11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:52.275 11:41:05 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:52.275 11:41:05 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:52.275 11:41:05 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.275 11:41:05 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.275 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:52.275 11:41:05 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
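Just before this block the trace moved on to the fourth controller: functions.sh@47-52 iterates /sys/class/nvme/nvme*, resolves the PCI address (0000:00:13.0), checks it against the allow/block lists with pci_can_use, and calls nvme_get nvme3 id-ctrl /dev/nvme3, while @60-63 registered the previous controller into the global ctrls/nvmes/bdfs/ordered_ctrls maps. A sketch of that outer loop, with the PCI lookup simplified to an assumed readlink (the trace does not show how @49 derives it):

    # Outer controller loop reconstructed from functions.sh@47-63 as echoed
    # above; the pci= line is an assumption, everything else mirrors the trace.
    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed lookup
        pci_can_use "$pci" || continue       # PCI allow/block list check
        ctrl_dev=${ctrl##*/}                 # e.g. nvme3
        sketch_nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        # ...namespace sweep as sketched earlier...
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns    # name of the per-ctrl ns array
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done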
00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
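One field worth decoding from the id-ctrl dump above: ver=0x10400 is the NVMe version word, encoded as major<<16 | minor<<8 | tertiary, so nvme3 reports NVMe 1.4.0. A small worked example:

    # Worked example: decode the VER field captured above.
    decode_nvme_ver() {
        local v=$(( $1 ))                    # accepts hex like 0x10400
        printf '%d.%d.%d\n' $(( v >> 16 )) $(( (v >> 8) & 0xff )) $(( v & 0xff ))
    }

    # decode_nvme_ver 0x10400   -> 1.4.0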
00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.276 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 
11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
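Each assignment in this dump goes through eval ('nvme3[reg]="val"') because the target array's name is built at runtime from the controller device. A nameref achieves the same indirection without eval, and the lookups traced further below (local -n _ctrl=nvme1) use exactly that; a small sketch, with set_reg as a hypothetical helper:

# Indirect assignment into a dynamically named array (bash >= 4.3).
declare -A nvme3
set_reg() {
    local -n _arr=$1    # bind _arr to the array whose name is in $1
    _arr[$2]=$3
}
set_reg nvme3 endgidmax 1
echo "${nvme3[endgidmax]}"    # -> 1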
00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.277 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
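Two of the registers captured just above are packed bytes: sqes=0x66 and cqes=0x44. Per the NVMe base specification, bits 3:0 give the required (minimum) queue entry size and bits 7:4 the maximum, each as log2 of the size in bytes; decoding the values from this trace:

# Decode the SQES/CQES bytes recorded above.
decode_qes() {
    local name=$1 qes=$2
    printf '%s: min %d B, max %d B\n' "$name" \
        $(( 1 << (qes & 0xf) )) $(( 1 << ((qes >> 4) & 0xf) ))
}
decode_qes sqes 0x66    # -> min 64 B, max 64 B (submission queue entries)
decode_qes cqes 0x44    # -> min 16 B, max 16 B (completion queue entries)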
00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:52.278 11:41:05 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:52.278 11:41:05 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:52.278 11:41:05 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:52.279 
11:41:05 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:52.279 11:41:05 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:52.279 11:41:05 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:52.279 11:41:05 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:52.279 11:41:05 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:52.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.114 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.114 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.376 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.376 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:53.376 11:41:06 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:53.376 11:41:06 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:53.376 11:41:06 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.376 11:41:06 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:53.376 ************************************ 00:09:53.376 START TEST nvme_simple_copy 00:09:53.376 ************************************ 00:09:53.376 11:41:06 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:53.637 Initializing NVMe Controllers 00:09:53.637 Attaching to 0000:00:10.0 00:09:53.637 Controller supports SCC. Attached to 0000:00:10.0 00:09:53.637 Namespace ID: 1 size: 6GB 00:09:53.637 Initialization complete. 00:09:53.637 00:09:53.637 Controller QEMU NVMe Ctrl (12340 ) 00:09:53.637 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:53.637 Namespace Block Size:4096 00:09:53.637 Writing LBAs 0 to 63 with Random Data 00:09:53.637 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:53.637 LBAs matching Written Data: 64 00:09:53.637 00:09:53.637 real 0m0.242s 00:09:53.637 user 0m0.087s 00:09:53.637 sys 0m0.054s 00:09:53.637 11:41:06 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.637 ************************************ 00:09:53.637 END TEST nvme_simple_copy 00:09:53.637 ************************************ 00:09:53.637 11:41:06 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:53.637 ************************************ 00:09:53.637 END TEST nvme_scc 00:09:53.637 ************************************ 00:09:53.637 00:09:53.637 real 0m7.839s 00:09:53.637 user 0m1.055s 00:09:53.637 sys 0m1.445s 00:09:53.637 11:41:06 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.637 11:41:06 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:53.637 11:41:06 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:53.637 11:41:07 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:53.637 11:41:07 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:53.637 11:41:07 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:53.637 11:41:07 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:53.637 11:41:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:53.637 11:41:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.637 11:41:07 -- common/autotest_common.sh@10 -- # set +x 00:09:53.637 ************************************ 00:09:53.637 START TEST nvme_fdp 00:09:53.637 ************************************ 00:09:53.637 11:41:07 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:53.899 * Looking for test storage... 
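The selection walked through before this run is why nvme1 carried the simple-copy test: get_ctrls_with_feature probes each controller's ONCS word, and bit 8 of ONCS advertises the Copy (simple copy) command. Every controller above reported oncs=0x15d, which has bit 8 (0x100) set. A sketch of the probe as the trace shows it:

# ONCS bit 8 == Copy command supported; 0x15d therefore qualifies.
ctrl_has_scc() {
    local oncs=$1
    (( oncs & 1 << 8 ))
}
ctrl_has_scc 0x15d && echo "controller supports Simple Copy"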
00:09:53.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:53.899 11:41:07 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:53.899 11:41:07 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:53.899 11:41:07 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:53.899 11:41:07 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.899 11:41:07 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:53.899 11:41:07 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.899 11:41:07 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.899 --rc genhtml_branch_coverage=1 00:09:53.899 --rc genhtml_function_coverage=1 00:09:53.899 --rc genhtml_legend=1 00:09:53.899 --rc geninfo_all_blocks=1 00:09:53.899 --rc geninfo_unexecuted_blocks=1 00:09:53.899 00:09:53.899 ' 00:09:53.899 11:41:07 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.899 --rc genhtml_branch_coverage=1 00:09:53.899 --rc genhtml_function_coverage=1 00:09:53.899 --rc genhtml_legend=1 00:09:53.899 --rc geninfo_all_blocks=1 00:09:53.899 --rc geninfo_unexecuted_blocks=1 00:09:53.899 00:09:53.899 ' 00:09:53.899 11:41:07 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.899 --rc genhtml_branch_coverage=1 00:09:53.899 --rc genhtml_function_coverage=1 00:09:53.899 --rc genhtml_legend=1 00:09:53.899 --rc geninfo_all_blocks=1 00:09:53.899 --rc geninfo_unexecuted_blocks=1 00:09:53.899 00:09:53.899 ' 00:09:53.899 11:41:07 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:53.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.899 --rc genhtml_branch_coverage=1 00:09:53.899 --rc genhtml_function_coverage=1 00:09:53.899 --rc genhtml_legend=1 00:09:53.899 --rc geninfo_all_blocks=1 00:09:53.899 --rc geninfo_unexecuted_blocks=1 00:09:53.899 00:09:53.899 ' 00:09:53.899 11:41:07 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:53.899 11:41:07 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:53.899 11:41:07 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:53.900 11:41:07 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:53.900 11:41:07 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.900 11:41:07 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.900 11:41:07 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.900 11:41:07 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.900 11:41:07 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.900 11:41:07 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.900 11:41:07 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.900 11:41:07 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.900 11:41:07 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:53.900 11:41:07 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
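The nvme_fdp preamble above runs scripts/common.sh's lt 1.15 2 to decide which lcov coverage flags apply: both version strings are split on the characters .-: and compared field by field as integers. A simplified stand-in for that comparison, not the SPDK helper itself:

# "Less than" over dotted version strings, as in the cmp_versions trace.
version_lt() {
    local IFS=.-: v
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
        local x=${a[v]:-0} y=${b[v]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov older than 2: use compat LCOV_OPTS"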
00:09:53.900 11:41:07 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:53.900 11:41:07 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:53.900 11:41:07 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:53.900 11:41:07 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:53.900 11:41:07 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:53.900 11:41:07 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:53.900 11:41:07 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:53.900 11:41:07 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:53.900 11:41:07 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:53.900 11:41:07 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:53.900 11:41:07 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:54.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:54.422 Waiting for block devices as requested 00:09:54.422 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:54.422 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:54.422 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:54.683 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.979 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:59.979 11:41:12 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:59.979 11:41:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:59.979 11:41:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:59.979 11:41:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:59.979 11:41:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
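scan_nvme_ctrls, whose start is traced above, walks /sys/class/nvme and keeps only controllers whose backing device is a PCI function the test may claim. A rough sketch of that enumeration under the standard sysfs layout; the real pci_can_use check consults allow/deny lists and is more involved than shown:

# Enumerate NVMe controllers and their PCI addresses via sysfs.
scan_ctrls() {
    local ctrl link bdf
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        link=$(readlink -f "$ctrl/device")    # e.g. .../0000:00:11.0
        bdf=${link##*/}
        [[ $bdf =~ ^[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]$ ]] || continue
        echo "${ctrl##*/} -> $bdf"
    done
}
scan_ctrls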
00:09:59.979 11:41:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:59.979 11:41:13 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:59.979 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:59.980 11:41:13 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
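Earlier in this nvme0 dump the controller reported ver=0x10400. The field packs the implemented NVMe version as major (bits 31:16), minor (bits 15:8), and tertiary (bits 7:0), so it decodes as follows:

# Decode the ver field reported above: 0x10400 -> NVMe 1.4.0.
ver=0x10400
printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))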
00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.980 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:59.981 11:41:13 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:59.981 11:41:13 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:59.981 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:59.982 11:41:13 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:59.982 
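The run of eval/assignment pairs above is nvme/functions.sh's nvme_get filling the bash associative array nvme0 from "nvme id-ctrl /dev/nvme0": every "field : value" line of nvme-cli output is split on ':' via IFS into reg and val, the [[ -n ... ]] guard skips empty values, and eval stores the pair (e.g. nvme0[wctemp]=343, nvme0[cctemp]=373, nvme0[ps0]='mp:25.00W ...'). A minimal stand-alone sketch of that pattern, simplified from the real helper (the array name "ctrl" and the whitespace trimming are ours, not lines from this log):

    #!/usr/bin/env bash
    # Parse nvme-cli id-ctrl text output into an associative array the way the
    # traced nvme_get loop does: split each line on the first ':', skip empty
    # values, and eval the pair into the array.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # "ps   0" becomes "ps0", as in the trace
        [[ -n $val ]] || continue           # mirrors the [[ -n ... ]] guards above
        eval "ctrl[$reg]=\"${val# }\""      # e.g. ctrl[wctemp]=343
    done < <(nvme id-ctrl /dev/nvme0)
    echo "wctemp=${ctrl[wctemp]:-?} cctemp=${ctrl[cctemp]:-?}"

Note that read assigns everything after the first colon to val, which is why multi-colon values such as the ps0 power-state line survive intact in the array.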
11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.982 11:41:13 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:59.983 11:41:13 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.983 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:59.984 11:41:13 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.984 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:59.985 11:41:13 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:59.985 11:41:13 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:59.985 11:41:13 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:59.985 11:41:13 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # 
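At this point the trace registers the finished controller (functions.sh@58-@63: _ctrl_ns, ctrls, nvmes, bdfs[nvme0]=0000:00:11.0, ordered_ctrls) and the @47 loop advances to /sys/class/nvme/nvme1, whose PCI address 0000:00:10.0 passes pci_can_use. A hypothetical condensation of that sysfs walk follows; the real script keeps per-controller namespace arrays by nameref (nvme0_ns) plus an ordered index, which this sketch flattens into space-separated strings:

    #!/usr/bin/env bash
    # Walk /sys/class/nvme and record each controller, its PCI address (BDF),
    # and its namespaces, roughly as the traced loop does.
    declare -A ctrls=() bdfs=() nvmes=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        dev=${ctrl##*/}                                          # e.g. nvme1
        bdfs[$dev]=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
        ctrls[$dev]=$dev
        for ns in "$ctrl/${dev}n"*; do                           # namespaces: nvme1n1, ...
            [[ -e $ns ]] && nvmes[$dev]+="${ns##*/} "
        done
    done

Each controller found this way is then fed back through the same nvme_get parser, which is why the nvme1 id-ctrl fields below (vid 0x1b36, sn 12340, mdts 7, ...) repeat the nvme0 sequence with fresh values.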
IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:59.985 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 
11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:59.986 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 
11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:59.987 11:41:13 nvme_fdp -- 
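The sqes/cqes values captured just above decode per the NVMe spec: each byte packs the maximum entry size in the high nibble and the required (minimum) size in the low nibble, both as log2 of the byte count. A quick check of the traced values (0x66 and 0x44 come from the log; the decode is standard NVMe, not something functions.sh does here):

  sqes=0x66; cqes=0x44
  echo "SQ entry: min $((1 << (sqes & 0xf))), max $((1 << (sqes >> 4))) bytes"   # 64, 64
  echo "CQ entry: min $((1 << (cqes & 0xf))), max $((1 << (cqes >> 4))) bytes"   # 16, 16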
nvme/functions.sh@21 -- # IFS=: 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.987 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:59.988 11:41:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
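At this point the trace has finished the id-ctrl pass for nvme1 and, via functions.sh@53-57, moved on to its first namespace (nvme1n1, parsed with nvme id-ns). The repeating IFS=: / read -r reg val / eval pattern is the whole parsing mechanism: each line of nvme-cli output splits at the first colon into a register name and a value, and non-empty values are eval'ed into a global associative array named after the device. A minimal sketch of that loop, reconstructed from the trace (the whitespace trimming is an assumption; the real functions.sh may differ in detail):

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "${ref}=()"                        # e.g. nvme1=() or nvme1n1=(), as at @20
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}                 # 'ps    0 ' -> 'ps0' (assumed trim)
          val=${val# }                             # drop nvme-cli's padding space (assumed)
          [[ -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
      done < <(/usr/local/src/nvme-cli/nvme "$@")  # id-ctrl /dev/nvme1, id-ns /dev/nvme1n1, ...
  }

Because read splits only at the first colon, values that themselves contain colons (ps0, rwt, the lbafN descriptors) survive intact, exactly as the evals in the trace show.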
0x17a17a ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:59.989 11:41:13 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:59.989 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:59.990 11:41:13 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:59.990 11:41:13 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:59.991 11:41:13 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:59.991 11:41:13 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:59.991 11:41:13 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:59.991 
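functions.sh@58-63 in the trace just above is the bookkeeping half of the scan: the namespace index keys the controller's _ns array, and the global maps tie the controller device name to its namespace array name and PCI address (nvme1 -> nvme1_ns -> 0000:00:10.0) before the outer loop advances to nvme2 at 0000:00:12.0. The enclosing loop, reconstructed in shape from the @NN line numbers in the trace (it runs inside a shell function in the real script, which local -n requires; how pci is derived from sysfs is an assumption, only the assignments at @58-63 are verbatim):

  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue                        # @48
      pci=$(basename "$(readlink -f "$ctrl/device")")   # @49 (derivation assumed)
      pci_can_use "$pci" || continue                    # @50
      ctrl_dev=${ctrl##*/}                              # @51: nvme1, nvme2, ...
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # @52
      local -n _ctrl_ns=${ctrl_dev}_ns                  # @53
      for ns in "$ctrl/${ctrl##*/}n"*; do               # @54
          [[ -e $ns ]] || continue                      # @55
          ns_dev=${ns##*/}                              # @56: nvme1n1, ...
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"       # @57
          _ctrl_ns[${ns##*n}]=$ns_dev                   # @58: keyed by namespace index
      done
      ctrls["$ctrl_dev"]=$ctrl_dev                      # @60
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # @61
      bdfs["$ctrl_dev"]=$pci                            # @62: e.g. 0000:00:10.0
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63
  done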
11:41:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:59.991 11:41:13 nvme_fdp 
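Each eval above corresponds to one line of nvme-cli's human-readable output; the header line, which carries no value, is what produces the initial [[ -n '' ]] skip. Illustratively, the id-ctrl text being consumed for nvme2 looks roughly like this (the values are the ones captured in the trace; the header wording and column padding are nvme-cli's and are approximated here):

  $ /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
  NVME Identify Controller:
  vid       : 0x1b36
  ssvid     : 0x1af4
  sn        : 12342
  mn        : QEMU NVMe Ctrl
  fr        : 8.0.0
  rab       : 6
  ieee      : 525400
  cmic      : 0
  mdts      : 7
  ...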
-- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.991 11:41:13 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.991 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp 
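The wctemp/cctemp values just captured are absolute temperatures in kelvins, the NVMe reporting convention, so this emulated controller advertises roughly a 70 C warning threshold and a 100 C critical threshold:

  wctemp=343; cctemp=373
  echo "warning: $((wctemp - 273)) C, critical: $((cctemp - 273)) C"   # 70 C, 100 C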
-- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:59.992 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:59.993 11:41:13 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:59.993 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:59.994 11:41:13 nvme_fdp -- 
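
The long run of "@21 IFS=: / @22 [[ -n ... ]] / @23 eval" entries here is nvme/functions.sh's nvme_get caching every "field : value" line that nvme-cli prints into a global bash associative array (nvme2 above, nvme2n1 below), so later checks can index the identify data without re-running the tool. A minimal sketch of that mechanism as the trace records it, assuming nvme-cli's one-field-per-line "reg : val" output; the exact helper body in functions.sh may differ:

    nvme_get() {                          # e.g. nvme_get nvme2 id-ctrl /dev/nvme2
        local ref=$1 reg val              # @17 in the trace
        shift                             # @18
        local -gA "$ref=()"               # @20: global assoc array, nvme2=()
        while IFS=: read -r reg val; do   # @21: split each output line on ':'
            [[ -n $val ]] || continue     # @22: skip lines carrying no value
            reg=${reg//[[:space:]]/}      # trim padding around the field name
            val=${val# }
            eval "${ref}[$reg]=\"$val\""  # @23: nvme2[sqes]=0x66, nvme2[cqes]=0x44, ...
        done < <(/usr/local/src/nvme-cli/nvme "$@")  # @16: the suite's own nvme-cli build
    }
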
nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:59.994 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:59.995 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- 
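
Between controller dumps the scan walks each namespace node under the controller and repeats the same capture against `nvme id-ns`, which is what the @54-@58 entries record for nvme2n1 before moving on to nvme2n2. A sketch of that loop under the same assumptions as the nvme_get reconstruction above (variable names follow the trace):

    local -n _ctrl_ns=${ctrl_dev}_ns             # @53: nvme2_ns holds this ctrl's namespaces
    for ns in "$ctrl/${ctrl##*/}n"*; do          # @54: /sys/class/nvme/nvme2/nvme2n1 ...
        [[ -e $ns ]] || continue                 # @55
        ns_dev=${ns##*/}                         # @56: nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # @57: cache id-ns fields into nvme2n1=()
        _ctrl_ns[${ns_dev##*n}]=$ns_dev          # @58: index namespaces by their number
    done
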
nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:59.996 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:59.997 11:41:13 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:59.997 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@54 -- # for 
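
The payoff of this caching is that later FDP helpers only index the arrays instead of shelling out again. As an illustrative decode (mine, not taken from functions.sh), the in-use LBA format of nvme2n2 follows directly from the flbas and lbaf4 values captured above:

    fmt=$(( nvme2n2[flbas] & 0xf ))                     # 0x4 -> LBA format 4
    lbaf=${nvme2n2[lbaf$fmt]}                           # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}           # extract lbads -> 12
    echo "nvme2n2 LBA data size: $((1 << lbads)) bytes" # 4096
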
ns in "$ctrl/${ctrl##*/}n"* 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.998 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:59.999 
11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:59.999 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.000 11:41:13 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:00.000 11:41:13 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:00.000 11:41:13 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:00.000 11:41:13 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.000 11:41:13 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:00.000 11:41:13 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.000 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 
11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.001 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 
11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.002 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.003 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.004 11:41:13 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:00.004 11:41:13 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
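
The pages of eval/IFS trace above are nvme/functions.sh folding "nvme id-ctrl" (and id-ns) output into Bash associative arrays, one "reg : val" pair per line. A minimal sketch of that loop, assuming plain nvme-cli text output (the real helper also shifts the device argument and handles multi-word values):

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # "vid " -> "vid"
        [[ -n $reg && -n $val ]] || continue   # skip banner and blank lines
        ctrl[$reg]=${val# }                    # e.g. ctrl[ctratt]=0x88010
    done < <(nvme id-ctrl /dev/nvme3)
    echo "ctratt=${ctrl[ctratt]}"
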
00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:00.004 11:41:13 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:00.004 11:41:13 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:00.004 11:41:13 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:00.004 11:41:13 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:00.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:01.150 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.150 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.150 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.150 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.150 11:41:14 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:01.150 11:41:14 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:01.150 11:41:14 
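
Only nvme3 survives the capability scan above: its CTRATT reads 0x88010, which has bit 19 (Flexible Data Placement) set, while the other controllers report 0x8000. A sketch of the predicate the trace exercises, using the same values:

    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))                 # exit 0 only when the FDP bit is set
    }
    ctrl_has_fdp 0x88010 && echo "nvme3 qualifies"   # 0x88010 & 0x80000 != 0
    ctrl_has_fdp 0x8000  || echo "no FDP"            # bit 19 clear on the rest

With nvme3 selected, setup.sh has rebound the devices to uio_pci_generic and the fdp test binary now runs against 0000:00:13.0, producing the report below.
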
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.150 11:41:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:01.150 ************************************ 00:10:01.150 START TEST nvme_flexible_data_placement 00:10:01.150 ************************************ 00:10:01.150 11:41:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:01.412 Initializing NVMe Controllers 00:10:01.412 Attaching to 0000:00:13.0 00:10:01.412 Controller supports FDP Attached to 0000:00:13.0 00:10:01.412 Namespace ID: 1 Endurance Group ID: 1 00:10:01.412 Initialization complete. 00:10:01.412 00:10:01.412 ================================== 00:10:01.412 == FDP tests for Namespace: #01 == 00:10:01.412 ================================== 00:10:01.412 00:10:01.412 Get Feature: FDP: 00:10:01.412 ================= 00:10:01.412 Enabled: Yes 00:10:01.412 FDP configuration Index: 0 00:10:01.412 00:10:01.412 FDP configurations log page 00:10:01.412 =========================== 00:10:01.412 Number of FDP configurations: 1 00:10:01.412 Version: 0 00:10:01.412 Size: 112 00:10:01.412 FDP Configuration Descriptor: 0 00:10:01.412 Descriptor Size: 96 00:10:01.412 Reclaim Group Identifier format: 2 00:10:01.412 FDP Volatile Write Cache: Not Present 00:10:01.412 FDP Configuration: Valid 00:10:01.412 Vendor Specific Size: 0 00:10:01.412 Number of Reclaim Groups: 2 00:10:01.412 Number of Reclaim Unit Handles: 8 00:10:01.412 Max Placement Identifiers: 128 00:10:01.412 Number of Namespaces Supported: 256 00:10:01.412 Reclaim Unit Nominal Size: 6000000 bytes 00:10:01.412 Estimated Reclaim Unit Time Limit: Not Reported 00:10:01.412 RUH Desc #000: RUH Type: Initially Isolated 00:10:01.412 RUH Desc #001: RUH Type: Initially Isolated 00:10:01.412 RUH Desc #002: RUH Type: Initially Isolated 00:10:01.412 RUH Desc #003: RUH Type: Initially Isolated 00:10:01.412 RUH Desc #004: RUH Type: Initially Isolated 00:10:01.412 RUH Desc #005: RUH Type: Initially Isolated 00:10:01.412 RUH Desc #006: RUH Type: Initially Isolated 00:10:01.412 RUH Desc #007: RUH Type: Initially Isolated 00:10:01.412 00:10:01.412 FDP reclaim unit handle usage log page 00:10:01.412 ====================================== 00:10:01.412 Number of Reclaim Unit Handles: 8 00:10:01.412 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:01.412 RUH Usage Desc #001: RUH Attributes: Unused 00:10:01.412 RUH Usage Desc #002: RUH Attributes: Unused 00:10:01.412 RUH Usage Desc #003: RUH Attributes: Unused 00:10:01.412 RUH Usage Desc #004: RUH Attributes: Unused 00:10:01.412 RUH Usage Desc #005: RUH Attributes: Unused 00:10:01.412 RUH Usage Desc #006: RUH Attributes: Unused 00:10:01.412 RUH Usage Desc #007: RUH Attributes: Unused 00:10:01.412 00:10:01.412 FDP statistics log page 00:10:01.412 ======================= 00:10:01.412 Host bytes with metadata written: 1965023232 00:10:01.412 Media bytes with metadata written: 1965318144 00:10:01.412 Media bytes erased: 0 00:10:01.412 00:10:01.412 FDP Reclaim unit handle status 00:10:01.412 ============================== 00:10:01.412 Number of RUHS descriptors: 2 00:10:01.412 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002e02 00:10:01.412 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:01.412 00:10:01.412 FDP write on placement id: 0 success 00:10:01.412 00:10:01.412 Set Feature: Enabling FDP events on Placement handle: 
#0 Success 00:10:01.412 00:10:01.412 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:01.412 00:10:01.412 Get Feature: FDP Events for Placement handle: #0 00:10:01.412 ======================== 00:10:01.412 Number of FDP Events: 6 00:10:01.412 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:01.412 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:01.412 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:01.412 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:01.412 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:01.412 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:01.412 00:10:01.412 FDP events log page 00:10:01.412 =================== 00:10:01.412 Number of FDP events: 1 00:10:01.412 FDP Event #0: 00:10:01.412 Event Type: RU Not Written to Capacity 00:10:01.412 Placement Identifier: Valid 00:10:01.412 NSID: Valid 00:10:01.412 Location: Valid 00:10:01.412 Placement Identifier: 0 00:10:01.412 Event Timestamp: 5 00:10:01.412 Namespace Identifier: 1 00:10:01.412 Reclaim Group Identifier: 0 00:10:01.412 Reclaim Unit Handle Identifier: 0 00:10:01.412 00:10:01.412 FDP test passed 00:10:01.412 00:10:01.412 real 0m0.210s 00:10:01.412 user 0m0.054s 00:10:01.412 sys 0m0.053s 00:10:01.412 ************************************ 00:10:01.412 11:41:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.412 11:41:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:01.412 END TEST nvme_flexible_data_placement 00:10:01.412 ************************************ 00:10:01.412 00:10:01.412 real 0m7.691s 00:10:01.412 user 0m0.986s 00:10:01.412 sys 0m1.439s 00:10:01.412 11:41:14 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.412 ************************************ 00:10:01.412 END TEST nvme_fdp 00:10:01.412 ************************************ 00:10:01.412 11:41:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:01.412 11:41:14 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:01.412 11:41:14 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:01.412 11:41:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:01.412 11:41:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.413 11:41:14 -- common/autotest_common.sh@10 -- # set +x 00:10:01.413 ************************************ 00:10:01.413 START TEST nvme_rpc 00:10:01.413 ************************************ 00:10:01.413 11:41:14 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:01.675 * Looking for test storage... 
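
The report above is the fdp test binary exercising the FDP feature and its log pages end to end. Roughly the same data can be read back with stock nvme-cli, assuming a CLI and device that follow the NVMe FDP spec (log page IDs 0x20-0x23, endurance group 1 as reported above; the lengths are illustrative):

    dev=/dev/nvme3
    nvme get-log $dev --log-id=0x20 --lsi=1 --log-len=512   # FDP configurations
    nvme get-log $dev --log-id=0x21 --lsi=1 --log-len=512   # reclaim unit handle usage
    nvme get-log $dev --log-id=0x22 --lsi=1 --log-len=64    # FDP statistics
    nvme get-log $dev --log-id=0x23 --lsi=1 --log-len=512   # FDP events
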
00:10:01.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.675 11:41:14 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:01.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.675 --rc genhtml_branch_coverage=1 00:10:01.675 --rc genhtml_function_coverage=1 00:10:01.675 --rc genhtml_legend=1 00:10:01.675 --rc geninfo_all_blocks=1 00:10:01.675 --rc geninfo_unexecuted_blocks=1 00:10:01.675 00:10:01.675 ' 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:01.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.675 --rc genhtml_branch_coverage=1 00:10:01.675 --rc genhtml_function_coverage=1 00:10:01.675 --rc genhtml_legend=1 00:10:01.675 --rc geninfo_all_blocks=1 00:10:01.675 --rc geninfo_unexecuted_blocks=1 00:10:01.675 00:10:01.675 ' 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:01.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.675 --rc genhtml_branch_coverage=1 00:10:01.675 --rc genhtml_function_coverage=1 00:10:01.675 --rc genhtml_legend=1 00:10:01.675 --rc geninfo_all_blocks=1 00:10:01.675 --rc geninfo_unexecuted_blocks=1 00:10:01.675 00:10:01.675 ' 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:01.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.675 --rc genhtml_branch_coverage=1 00:10:01.675 --rc genhtml_function_coverage=1 00:10:01.675 --rc genhtml_legend=1 00:10:01.675 --rc geninfo_all_blocks=1 00:10:01.675 --rc geninfo_unexecuted_blocks=1 00:10:01.675 00:10:01.675 ' 00:10:01.675 11:41:14 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.675 11:41:14 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:01.675 11:41:14 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:01.675 11:41:14 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77502 00:10:01.675 11:41:14 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:01.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.675 11:41:14 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77502 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 77502 ']' 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.675 11:41:14 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.675 11:41:14 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:01.676 11:41:14 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.676 11:41:14 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.676 11:41:14 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.676 [2024-11-19 11:41:15.069774] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
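
The "Waiting for process to start up and listen on UNIX domain socket" banner above comes from autotest_common.sh's waitforlisten, which polls the new target's RPC socket rather than sleeping for a fixed interval. A condensed sketch of the idea (not the literal helper):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                     # never came up
    }

Once the socket answers, the test attaches the controller over RPC and deliberately points bdev_nvme_apply_firmware at a missing file, expecting the -32603 "open file failed." response shown below.
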
00:10:01.676 [2024-11-19 11:41:15.070386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77502 ] 00:10:01.937 [2024-11-19 11:41:15.207018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:01.937 [2024-11-19 11:41:15.259402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.937 [2024-11-19 11:41:15.259472] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.877 11:41:15 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.877 11:41:15 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:02.877 11:41:15 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:02.877 Nvme0n1 00:10:02.877 11:41:16 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:02.877 11:41:16 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:03.138 request: 00:10:03.138 { 00:10:03.138 "bdev_name": "Nvme0n1", 00:10:03.138 "filename": "non_existing_file", 00:10:03.138 "method": "bdev_nvme_apply_firmware", 00:10:03.138 "req_id": 1 00:10:03.138 } 00:10:03.138 Got JSON-RPC error response 00:10:03.138 response: 00:10:03.138 { 00:10:03.138 "code": -32603, 00:10:03.138 "message": "open file failed." 00:10:03.138 } 00:10:03.138 11:41:16 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:03.138 11:41:16 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:03.138 11:41:16 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:03.399 11:41:16 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:03.399 11:41:16 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77502 00:10:03.399 11:41:16 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 77502 ']' 00:10:03.399 11:41:16 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 77502 00:10:03.399 11:41:16 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:03.399 11:41:16 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.399 11:41:16 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77502 00:10:03.399 11:41:16 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.399 killing process with pid 77502 00:10:03.399 11:41:16 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.399 11:41:16 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77502' 00:10:03.399 11:41:16 nvme_rpc -- common/autotest_common.sh@969 -- # kill 77502 00:10:03.399 11:41:16 nvme_rpc -- common/autotest_common.sh@974 -- # wait 77502 00:10:03.660 00:10:03.660 real 0m2.103s 00:10:03.660 user 0m3.969s 00:10:03.660 sys 0m0.549s 00:10:03.660 11:41:16 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.660 ************************************ 00:10:03.660 END TEST nvme_rpc 00:10:03.660 ************************************ 00:10:03.660 11:41:16 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.660 11:41:16 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:03.660 11:41:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:10:03.660 11:41:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.660 11:41:16 -- common/autotest_common.sh@10 -- # set +x 00:10:03.660 ************************************ 00:10:03.660 START TEST nvme_rpc_timeouts 00:10:03.660 ************************************ 00:10:03.660 11:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:03.660 * Looking for test storage... 00:10:03.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:03.660 11:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:03.660 11:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:10:03.660 11:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:03.660 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:03.660 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.921 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:03.921 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.921 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.921 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.921 11:41:17 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:03.921 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.921 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:03.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.921 --rc genhtml_branch_coverage=1 00:10:03.921 --rc genhtml_function_coverage=1 00:10:03.921 --rc genhtml_legend=1 00:10:03.921 --rc geninfo_all_blocks=1 00:10:03.921 --rc geninfo_unexecuted_blocks=1 00:10:03.921 00:10:03.921 ' 00:10:03.921 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:03.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.921 --rc genhtml_branch_coverage=1 00:10:03.921 --rc genhtml_function_coverage=1 00:10:03.921 --rc genhtml_legend=1 00:10:03.921 --rc geninfo_all_blocks=1 00:10:03.921 --rc geninfo_unexecuted_blocks=1 00:10:03.921 00:10:03.921 ' 00:10:03.921 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:03.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.921 --rc genhtml_branch_coverage=1 00:10:03.921 --rc genhtml_function_coverage=1 00:10:03.921 --rc genhtml_legend=1 00:10:03.921 --rc geninfo_all_blocks=1 00:10:03.921 --rc geninfo_unexecuted_blocks=1 00:10:03.921 00:10:03.921 ' 00:10:03.921 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:03.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.921 --rc genhtml_branch_coverage=1 00:10:03.921 --rc genhtml_function_coverage=1 00:10:03.921 --rc genhtml_legend=1 00:10:03.921 --rc geninfo_all_blocks=1 00:10:03.921 --rc geninfo_unexecuted_blocks=1 00:10:03.921 00:10:03.921 ' 00:10:03.921 11:41:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:03.921 11:41:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77556 00:10:03.921 11:41:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77556 00:10:03.921 11:41:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77588 00:10:03.921 11:41:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
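The trap registered at nvme_rpc_timeouts.sh@26 above is the harness's cleanup idiom: on any exit path, kill the target and remove both settings snapshots. A minimal standalone sketch of the same lifecycle, with paths and flags taken from the trace; the rpc_get_methods readiness poll is an assumption standing in for the harness's waitforlisten helper:

    #!/usr/bin/env bash
    # Start spdk_tgt, guarantee cleanup via trap, wait for the RPC socket.
    set -euo pipefail

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tmp_default=$(mktemp /tmp/settings_default_XXXX)
    tmp_modified=$(mktemp /tmp/settings_modified_XXXX)

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!

    # On interrupt or normal exit, tear the target down and drop the snapshots.
    trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmp_default} ${tmp_modified}' SIGINT SIGTERM EXIT

    # Poll until the target answers on its default UNIX socket
    # (assumption: simple retry loop instead of waitforlisten).
    for _ in $(seq 1 100); do
        "$rpc_py" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done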
00:10:03.921 11:41:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77588 00:10:03.921 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 77588 ']' 00:10:03.921 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.921 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.921 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.921 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.921 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:03.921 11:41:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:03.921 [2024-11-19 11:41:17.151706] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:03.921 [2024-11-19 11:41:17.151850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77588 ] 00:10:03.921 [2024-11-19 11:41:17.288722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:04.180 [2024-11-19 11:41:17.340138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.180 [2024-11-19 11:41:17.340200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.752 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.752 11:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:04.752 Checking default timeout settings: 00:10:04.752 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:04.752 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:05.011 Making settings changes with rpc: 00:10:05.011 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:05.011 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:05.287 Check default vs. modified settings: 00:10:05.287 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:05.287 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77556 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77556 00:10:05.566 Setting action_on_timeout is changed as expected. 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77556 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77556 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:05.566 Setting timeout_us is changed as expected. 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
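Condensed, the check being traced here is: snapshot the JSON config before and after bdev_nvme_set_options, extract each timeout field from both snapshots, and assert that it changed. A sketch of that loop using the exact extraction pipeline from the trace (grep the field, take column two, strip JSON punctuation); the snapshot filenames drop the pid suffix for brevity:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc_py" save_config > /tmp/settings_default        # defaults
    "$rpc_py" bdev_nvme_set_options \
        --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc_py" save_config > /tmp/settings_modified       # after the change

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # Pull the field's value out of each snapshot, as the script does.
        before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting"  /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" == "$after" ]; then
            echo "Setting $setting was not changed" >&2
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done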
00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77556 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77556 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:05.566 Setting timeout_admin_us is changed as expected. 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77556 /tmp/settings_modified_77556 00:10:05.566 11:41:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77588 00:10:05.566 11:41:18 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 77588 ']' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 77588 00:10:05.566 11:41:18 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:05.566 11:41:18 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77588 00:10:05.566 killing process with pid 77588 00:10:05.566 11:41:18 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.566 11:41:18 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77588' 00:10:05.566 11:41:18 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 77588 00:10:05.566 11:41:18 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 77588 00:10:05.826 RPC TIMEOUT SETTING TEST PASSED. 00:10:05.826 11:41:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:10:05.826 00:10:05.826 real 0m2.257s 00:10:05.826 user 0m4.447s 00:10:05.826 sys 0m0.540s 00:10:05.826 11:41:19 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.826 ************************************ 00:10:05.826 END TEST nvme_rpc_timeouts 00:10:05.826 ************************************ 00:10:05.826 11:41:19 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:05.826 11:41:19 -- spdk/autotest.sh@239 -- # uname -s 00:10:06.083 11:41:19 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:06.083 11:41:19 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:06.083 11:41:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:06.083 11:41:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.083 11:41:19 -- common/autotest_common.sh@10 -- # set +x 00:10:06.083 ************************************ 00:10:06.083 START TEST sw_hotplug 00:10:06.083 ************************************ 00:10:06.083 11:41:19 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:06.083 * Looking for test storage... 00:10:06.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:06.083 11:41:19 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:06.083 11:41:19 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:06.083 11:41:19 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:10:06.083 11:41:19 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.083 11:41:19 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:06.083 11:41:19 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.083 11:41:19 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.083 --rc genhtml_branch_coverage=1 00:10:06.083 --rc genhtml_function_coverage=1 00:10:06.083 --rc genhtml_legend=1 00:10:06.083 --rc geninfo_all_blocks=1 00:10:06.083 --rc geninfo_unexecuted_blocks=1 00:10:06.083 00:10:06.083 ' 00:10:06.083 11:41:19 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.083 --rc genhtml_branch_coverage=1 00:10:06.083 --rc genhtml_function_coverage=1 00:10:06.083 --rc genhtml_legend=1 00:10:06.083 --rc geninfo_all_blocks=1 00:10:06.083 --rc geninfo_unexecuted_blocks=1 00:10:06.083 00:10:06.083 ' 00:10:06.083 11:41:19 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.083 --rc genhtml_branch_coverage=1 00:10:06.083 --rc genhtml_function_coverage=1 00:10:06.083 --rc genhtml_legend=1 00:10:06.083 --rc geninfo_all_blocks=1 00:10:06.084 --rc geninfo_unexecuted_blocks=1 00:10:06.084 00:10:06.084 ' 00:10:06.084 11:41:19 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:06.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.084 --rc genhtml_branch_coverage=1 00:10:06.084 --rc genhtml_function_coverage=1 00:10:06.084 --rc genhtml_legend=1 00:10:06.084 --rc geninfo_all_blocks=1 00:10:06.084 --rc geninfo_unexecuted_blocks=1 00:10:06.084 00:10:06.084 ' 00:10:06.084 11:41:19 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:06.341 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:06.599 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:06.599 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:06.599 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:06.599 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:06.599 11:41:19 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:06.599 11:41:19 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:06.599 11:41:19 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:10:06.599 11:41:19 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:06.599 11:41:19 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:06.599 11:41:19 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:06.599 11:41:19 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:06.599 11:41:19 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:06.599 11:41:19 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:06.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:07.115 Waiting for block devices as requested 00:10:07.115 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.115 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.115 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.374 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:12.660 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:12.660 11:41:25 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:12.660 11:41:25 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:12.921 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:12.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:12.921 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:13.183 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:13.444 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:13.444 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:13.444 11:41:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78434 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:13.444 11:41:26 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:13.444 11:41:26 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:13.444 11:41:26 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:13.444 11:41:26 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:13.444 11:41:26 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:13.444 11:41:26 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:13.704 Initializing NVMe Controllers 00:10:13.704 Attaching to 0000:00:10.0 00:10:13.704 Attaching to 0000:00:11.0 00:10:13.704 Attached to 0000:00:11.0 00:10:13.704 Attached to 0000:00:10.0 00:10:13.704 Initialization complete. Starting I/O... 
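For reference, the class-code scan traced above (scripts/common.sh) selects NVMe controllers by PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). The same pipeline as a one-liner, reassembled from the lspci/grep/awk/tr steps visible in the trace (BDFs will differ per machine):

    # Machine-readable lspci with numeric IDs and full BDFs,
    # filtered to prog-if 02 and class code 0108, quotes stripped.
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

On this box that yields 0000:00:10.0 through 0000:00:13.0, of which the test keeps the first nvme_count=2.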
00:10:13.704 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:13.704 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:13.704 00:10:14.648 QEMU NVMe Ctrl (12341 ): 2720 I/Os completed (+2720) 00:10:14.648 QEMU NVMe Ctrl (12340 ): 2730 I/Os completed (+2730) 00:10:14.648 00:10:16.034 QEMU NVMe Ctrl (12341 ): 6124 I/Os completed (+3404) 00:10:16.034 QEMU NVMe Ctrl (12340 ): 6137 I/Os completed (+3407) 00:10:16.034 00:10:16.975 QEMU NVMe Ctrl (12341 ): 9780 I/Os completed (+3656) 00:10:16.975 QEMU NVMe Ctrl (12340 ): 9808 I/Os completed (+3671) 00:10:16.975 00:10:17.919 QEMU NVMe Ctrl (12341 ): 14089 I/Os completed (+4309) 00:10:17.919 QEMU NVMe Ctrl (12340 ): 14103 I/Os completed (+4295) 00:10:17.919 00:10:18.862 QEMU NVMe Ctrl (12341 ): 18648 I/Os completed (+4559) 00:10:18.862 QEMU NVMe Ctrl (12340 ): 18654 I/Os completed (+4551) 00:10:18.862 00:10:19.803 11:41:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:19.804 11:41:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:19.804 11:41:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:19.804 [2024-11-19 11:41:32.850989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:19.804 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:19.804 [2024-11-19 11:41:32.852512] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 [2024-11-19 11:41:32.852607] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 [2024-11-19 11:41:32.852635] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 [2024-11-19 11:41:32.852660] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:19.804 [2024-11-19 11:41:32.853635] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 [2024-11-19 11:41:32.853673] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 [2024-11-19 11:41:32.853684] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 [2024-11-19 11:41:32.853695] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 11:41:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:19.804 11:41:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:19.804 [2024-11-19 11:41:32.874321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
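Each hotplug event in this phase is a software-initiated surprise removal: the harness detaches the PCI function through sysfs while the hotplug example still has I/O in flight, which is why the driver logs the failed state and aborts its outstanding trackers. xtrace does not print redirection targets, so the bare "echo 1" / "echo uio_pci_generic" / "echo 0000:00:10.0" lines in the surrounding trace are writes into sysfs whose destinations are elided. A sketch using the standard kernel interfaces (the rescan path appears verbatim in the harness's cleanup trap later in this log):

    # Software "surprise removal" of a PCI NVMe function, then re-discovery.
    bdf=0000:00:10.0   # illustrative; this run removes 0000:00:10.0 and 0000:00:11.0

    # Detach the device out from under the running application.
    echo 1 > "/sys/bus/pci/devices/${bdf}/remove"

    # ...the application observes the failed state and aborts outstanding I/O...

    # Bring the device back.
    echo 1 > /sys/bus/pci/rescan

The rebinding writes back to uio_pci_generic are left out rather than guessed, since their targets are not visible in the trace.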
00:10:19.804 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:19.804 [2024-11-19 11:41:32.875192] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 [2024-11-19 11:41:32.875292] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 [2024-11-19 11:41:32.875348] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 [2024-11-19 11:41:32.875362] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:19.804 [2024-11-19 11:41:32.876157] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 [2024-11-19 11:41:32.876183] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 [2024-11-19 11:41:32.876196] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 [2024-11-19 11:41:32.876205] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.804 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:19.804 EAL: Scan for (pci) bus failed. 00:10:19.804 11:41:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:19.804 11:41:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:19.804 11:41:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:19.804 11:41:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:19.804 11:41:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:19.804 00:10:19.804 11:41:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:19.804 11:41:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:19.804 11:41:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:19.804 11:41:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:19.804 11:41:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:19.804 Attaching to 0000:00:10.0 00:10:19.804 Attached to 0000:00:10.0 00:10:19.804 11:41:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:19.804 11:41:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:19.804 11:41:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:19.804 Attaching to 0000:00:11.0 00:10:19.804 Attached to 0000:00:11.0 00:10:20.745 QEMU NVMe Ctrl (12340 ): 3885 I/Os completed (+3885) 00:10:20.745 QEMU NVMe Ctrl (12341 ): 3427 I/Os completed (+3427) 00:10:20.745 00:10:21.688 QEMU NVMe Ctrl (12340 ): 6993 I/Os completed (+3108) 00:10:21.688 QEMU NVMe Ctrl (12341 ): 6560 I/Os completed (+3133) 00:10:21.688 00:10:22.632 QEMU NVMe Ctrl (12340 ): 10077 I/Os completed (+3084) 00:10:22.632 QEMU NVMe Ctrl (12341 ): 9662 I/Os completed (+3102) 00:10:22.632 00:10:24.017 QEMU NVMe Ctrl (12340 ): 13125 I/Os completed (+3048) 00:10:24.017 QEMU NVMe Ctrl (12341 ): 12710 I/Os completed (+3048) 00:10:24.017 00:10:24.960 QEMU NVMe Ctrl (12340 ): 16173 I/Os completed (+3048) 00:10:24.960 QEMU NVMe Ctrl (12341 ): 15758 I/Os completed (+3048) 00:10:24.960 00:10:25.903 QEMU NVMe Ctrl (12340 ): 19237 I/Os completed (+3064) 00:10:25.903 QEMU NVMe Ctrl (12341 ): 18818 I/Os completed (+3060) 00:10:25.903 00:10:26.845 QEMU NVMe Ctrl (12340 ): 23563 I/Os completed (+4326) 00:10:26.845 QEMU NVMe Ctrl (12341 ): 23144 I/Os completed (+4326) 
00:10:26.845 00:10:27.788 QEMU NVMe Ctrl (12340 ): 27349 I/Os completed (+3786) 00:10:27.788 QEMU NVMe Ctrl (12341 ): 26981 I/Os completed (+3837) 00:10:27.788 00:10:28.733 QEMU NVMe Ctrl (12340 ): 30542 I/Os completed (+3193) 00:10:28.733 QEMU NVMe Ctrl (12341 ): 30193 I/Os completed (+3212) 00:10:28.733 00:10:29.678 QEMU NVMe Ctrl (12340 ): 33984 I/Os completed (+3442) 00:10:29.678 QEMU NVMe Ctrl (12341 ): 33642 I/Os completed (+3449) 00:10:29.678 00:10:30.621 QEMU NVMe Ctrl (12340 ): 37522 I/Os completed (+3538) 00:10:30.621 QEMU NVMe Ctrl (12341 ): 37223 I/Os completed (+3581) 00:10:30.621 00:10:32.008 QEMU NVMe Ctrl (12340 ): 40534 I/Os completed (+3012) 00:10:32.008 QEMU NVMe Ctrl (12341 ): 40252 I/Os completed (+3029) 00:10:32.008 00:10:32.008 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:32.008 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:32.008 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:32.008 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:32.008 [2024-11-19 11:41:45.151224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:32.008 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:32.008 [2024-11-19 11:41:45.152449] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 [2024-11-19 11:41:45.152499] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 [2024-11-19 11:41:45.152514] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 [2024-11-19 11:41:45.152534] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:32.008 [2024-11-19 11:41:45.154663] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 [2024-11-19 11:41:45.154731] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 [2024-11-19 11:41:45.154747] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 [2024-11-19 11:41:45.154762] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:32.008 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:32.008 [2024-11-19 11:41:45.175328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:32.008 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:32.008 [2024-11-19 11:41:45.176362] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 [2024-11-19 11:41:45.176429] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 [2024-11-19 11:41:45.176447] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 [2024-11-19 11:41:45.176462] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:32.008 [2024-11-19 11:41:45.177621] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 [2024-11-19 11:41:45.177663] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 [2024-11-19 11:41:45.177683] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 [2024-11-19 11:41:45.177696] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:32.008 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:32.008 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:32.008 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:32.008 EAL: Scan for (pci) bus failed. 00:10:32.008 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:32.008 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:32.008 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:32.268 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:32.268 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:32.268 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:32.268 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:32.268 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:32.268 Attaching to 0000:00:10.0 00:10:32.268 Attached to 0000:00:10.0 00:10:32.268 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:32.269 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:32.269 11:41:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:32.269 Attaching to 0000:00:11.0 00:10:32.269 Attached to 0000:00:11.0 00:10:32.841 QEMU NVMe Ctrl (12340 ): 1624 I/Os completed (+1624) 00:10:32.841 QEMU NVMe Ctrl (12341 ): 1412 I/Os completed (+1412) 00:10:32.841 00:10:33.785 QEMU NVMe Ctrl (12340 ): 4616 I/Os completed (+2992) 00:10:33.785 QEMU NVMe Ctrl (12341 ): 4411 I/Os completed (+2999) 00:10:33.785 00:10:34.729 QEMU NVMe Ctrl (12340 ): 7659 I/Os completed (+3043) 00:10:34.729 QEMU NVMe Ctrl (12341 ): 7489 I/Os completed (+3078) 00:10:34.729 00:10:35.680 QEMU NVMe Ctrl (12340 ): 10736 I/Os completed (+3077) 00:10:35.680 QEMU NVMe Ctrl (12341 ): 10565 I/Os completed (+3076) 00:10:35.680 00:10:36.622 QEMU NVMe Ctrl (12340 ): 13804 I/Os completed (+3068) 00:10:36.622 QEMU NVMe Ctrl (12341 ): 13630 I/Os completed (+3065) 00:10:36.622 00:10:38.011 QEMU NVMe Ctrl (12340 ): 16880 I/Os completed (+3076) 00:10:38.011 QEMU NVMe Ctrl (12341 ): 16716 I/Os completed (+3086) 00:10:38.011 00:10:38.956 QEMU NVMe Ctrl (12340 ): 19976 I/Os completed (+3096) 00:10:38.956 QEMU NVMe Ctrl (12341 ): 19812 I/Os completed (+3096) 00:10:38.956 
00:10:39.899 QEMU NVMe Ctrl (12340 ): 23024 I/Os completed (+3048) 00:10:39.899 QEMU NVMe Ctrl (12341 ): 22867 I/Os completed (+3055) 00:10:39.899 00:10:40.843 QEMU NVMe Ctrl (12340 ): 26076 I/Os completed (+3052) 00:10:40.843 QEMU NVMe Ctrl (12341 ): 25923 I/Os completed (+3056) 00:10:40.843 00:10:41.821 QEMU NVMe Ctrl (12340 ): 29184 I/Os completed (+3108) 00:10:41.821 QEMU NVMe Ctrl (12341 ): 29034 I/Os completed (+3111) 00:10:41.821 00:10:42.772 QEMU NVMe Ctrl (12340 ): 32296 I/Os completed (+3112) 00:10:42.772 QEMU NVMe Ctrl (12341 ): 32146 I/Os completed (+3112) 00:10:42.772 00:10:43.712 QEMU NVMe Ctrl (12340 ): 35433 I/Os completed (+3137) 00:10:43.712 QEMU NVMe Ctrl (12341 ): 35282 I/Os completed (+3136) 00:10:43.712 00:10:44.285 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:44.286 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:44.286 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:44.286 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:44.286 [2024-11-19 11:41:57.527861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:44.286 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:44.286 [2024-11-19 11:41:57.529024] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 [2024-11-19 11:41:57.529077] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 [2024-11-19 11:41:57.529093] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 [2024-11-19 11:41:57.529119] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:44.286 [2024-11-19 11:41:57.530884] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 [2024-11-19 11:41:57.530940] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 [2024-11-19 11:41:57.530955] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 [2024-11-19 11:41:57.530971] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 EAL: Cannot open sysfs resource 00:10:44.286 EAL: pci_scan_one(): cannot parse resource 00:10:44.286 EAL: Scan for (pci) bus failed. 00:10:44.286 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:44.286 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:44.286 [2024-11-19 11:41:57.556675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:44.286 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:44.286 [2024-11-19 11:41:57.557679] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 [2024-11-19 11:41:57.557714] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 [2024-11-19 11:41:57.557733] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 [2024-11-19 11:41:57.557747] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:44.286 [2024-11-19 11:41:57.558971] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 [2024-11-19 11:41:57.559013] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 [2024-11-19 11:41:57.559030] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 [2024-11-19 11:41:57.559047] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:44.286 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:44.286 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:44.286 EAL: Scan for (pci) bus failed. 00:10:44.286 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:44.286 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:44.286 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:44.286 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:44.548 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:44.548 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:44.548 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:44.548 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:44.548 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:44.548 Attaching to 0000:00:10.0 00:10:44.548 Attached to 0000:00:10.0 00:10:44.548 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:44.548 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:44.548 11:41:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:44.548 Attaching to 0000:00:11.0 00:10:44.548 Attached to 0000:00:11.0 00:10:44.548 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:44.548 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:44.548 [2024-11-19 11:41:57.874267] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:56.786 11:42:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:56.786 11:42:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:56.786 11:42:09 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.02 00:10:56.786 11:42:09 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.02 00:10:56.786 11:42:09 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:56.787 11:42:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.02 00:10:56.787 11:42:09 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.02 2 00:10:56.787 remove_attach_helper took 43.02s to complete (handling 2 nvme drive(s)) 11:42:09 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:03.374 11:42:15 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78434 00:11:03.374 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78434) - No such process 00:11:03.374 11:42:15 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78434 00:11:03.374 11:42:15 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:03.374 11:42:15 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:03.374 11:42:15 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:03.374 11:42:15 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=78982 00:11:03.374 11:42:15 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:03.374 11:42:15 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 78982 00:11:03.374 11:42:15 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 78982 ']' 00:11:03.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.374 11:42:15 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.374 11:42:15 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.374 11:42:15 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:03.374 11:42:15 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.374 11:42:15 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.374 11:42:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.374 [2024-11-19 11:42:15.961102] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
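The second phase (tgt_run_hotplug) repeats the exercise against a running spdk_tgt instead of the standalone example: it enables bdev_nvme's hotplug monitor, sysfs-removes the controllers, and polls bdev_get_bdevs until their PCI addresses drop out of the bdev list. A sketch of that poll, assembled from the rpc_cmd calls and the jq filter traced below; the grep comparison is a simplification of the script's bash array check:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Let bdev_nvme watch for device arrival and removal.
    "$rpc_py" bdev_nvme_set_hotplug -e

    # Mirror of the harness's bdev_bdfs helper: list the PCI addresses
    # still backing an nvme bdev.
    bdev_bdfs() {
        "$rpc_py" bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    while bdev_bdfs | grep -qF -e 0000:00:10.0 -e 0000:00:11.0; do
        printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
        sleep 0.5
    done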
00:11:03.374 [2024-11-19 11:42:15.961258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78982 ] 00:11:03.374 [2024-11-19 11:42:16.099238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.374 [2024-11-19 11:42:16.147606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.635 11:42:16 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.635 11:42:16 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:03.635 11:42:16 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:03.635 11:42:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.635 11:42:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.635 11:42:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.635 11:42:16 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:03.635 11:42:16 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:03.635 11:42:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:03.635 11:42:16 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:03.635 11:42:16 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:03.635 11:42:16 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:03.635 11:42:16 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:03.635 11:42:16 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:03.635 11:42:16 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:03.635 11:42:16 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:03.635 11:42:16 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:03.635 11:42:16 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:03.635 11:42:16 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.239 11:42:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.239 11:42:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.239 11:42:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:10.239 11:42:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.239 [2024-11-19 11:42:22.912562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:11:10.239 [2024-11-19 11:42:22.913620] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.239 [2024-11-19 11:42:22.913653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.239 [2024-11-19 11:42:22.913666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.239 [2024-11-19 11:42:22.913679] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.239 [2024-11-19 11:42:22.913688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.239 [2024-11-19 11:42:22.913696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.239 [2024-11-19 11:42:22.913706] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.239 [2024-11-19 11:42:22.913712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.239 [2024-11-19 11:42:22.913720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.239 [2024-11-19 11:42:22.913726] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.239 [2024-11-19 11:42:22.913734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.239 [2024-11-19 11:42:22.913740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.239 [2024-11-19 11:42:23.312554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:10.239 [2024-11-19 11:42:23.313572] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.239 [2024-11-19 11:42:23.313604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.239 [2024-11-19 11:42:23.313615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.239 [2024-11-19 11:42:23.313625] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.239 [2024-11-19 11:42:23.313633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.239 [2024-11-19 11:42:23.313642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.239 [2024-11-19 11:42:23.313648] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.239 [2024-11-19 11:42:23.313655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.239 [2024-11-19 11:42:23.313662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.239 [2024-11-19 11:42:23.313672] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.239 [2024-11-19 11:42:23.313678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.239 [2024-11-19 11:42:23.313686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.239 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:10.239 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.239 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.239 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.239 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.239 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.239 11:42:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.239 11:42:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.239 11:42:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.239 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:10.239 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:10.239 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.240 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.240 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:10.240 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:10.240 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.240 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.240 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.240 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:10.501 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:10.501 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:10.501 11:42:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:22.739 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:22.739 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:22.739 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:22.739 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:22.739 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:22.739 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:22.739 11:42:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.739 11:42:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.739 11:42:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.739 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:22.739 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:22.739 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:22.739 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:22.739 [2024-11-19 11:42:35.712750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:22.739 [2024-11-19 11:42:35.713949] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.739 [2024-11-19 11:42:35.713978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.739 [2024-11-19 11:42:35.713990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.739 [2024-11-19 11:42:35.714003] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.739 [2024-11-19 11:42:35.714011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.740 [2024-11-19 11:42:35.714018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.740 [2024-11-19 11:42:35.714026] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.740 [2024-11-19 11:42:35.714032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.740 [2024-11-19 11:42:35.714039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.740 [2024-11-19 11:42:35.714045] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.740 [2024-11-19 11:42:35.714052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.740 [2024-11-19 11:42:35.714059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.740 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:22.740 11:42:35 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:22.740 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:22.740 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:22.740 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:22.740 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:22.740 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:22.740 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:22.740 11:42:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.740 11:42:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.740 11:42:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.740 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:22.740 11:42:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.001 [2024-11-19 11:42:36.212756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:23.001 [2024-11-19 11:42:36.213765] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.001 [2024-11-19 11:42:36.213797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.001 [2024-11-19 11:42:36.213807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.001 [2024-11-19 11:42:36.213820] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.001 [2024-11-19 11:42:36.213827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.001 [2024-11-19 11:42:36.213836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.001 [2024-11-19 11:42:36.213842] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.001 [2024-11-19 11:42:36.213849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.001 [2024-11-19 11:42:36.213855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.002 [2024-11-19 11:42:36.213864] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.002 [2024-11-19 11:42:36.213870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.002 [2024-11-19 11:42:36.213878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.002 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:23.002 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.002 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.002 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.002 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.002 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:23.002 11:42:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.002 11:42:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.002 11:42:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.002 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:23.002 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:23.002 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.002 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.002 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:23.263 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:23.263 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.263 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.263 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.263 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:23.263 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:23.263 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.263 11:42:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:35.500 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:35.500 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:35.500 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:35.500 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.500 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.500 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.500 11:42:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.500 11:42:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.500 11:42:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.500 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:35.500 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:35.500 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:35.500 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:35.500 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:35.500 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:35.500 [2024-11-19 11:42:48.613002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
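The sw_hotplug.sh@12-13 entries repeated throughout this run show how the test polls for device departure: it asks the running SPDK app for its bdevs over RPC and extracts the PCI address of every NVMe-backed one. A reconstruction consistent with the xtrace (the real helper appears to feed the RPC output to jq via process substitution, which is why the log shows /dev/fd/63; rpc_cmd is the suite's wrapper around scripts/rpc.py):

  # bdev_bdfs, as reconstructed from the xtrace at sw_hotplug.sh@12-13
  bdev_bdfs() {
      rpc_cmd bdev_get_bdevs \
          | jq -r '.[].driver_specific.nvme[].pci_address' \
          | sort -u
  }

An empty result means every NVMe bdev, and therefore every controller, is gone, which is what the (( 0 > 0 )) checks above are testing.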
00:11:35.500 [2024-11-19 11:42:48.614104] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.500 [2024-11-19 11:42:48.614134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.501 [2024-11-19 11:42:48.614147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.501 [2024-11-19 11:42:48.614158] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.501 [2024-11-19 11:42:48.614167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.501 [2024-11-19 11:42:48.614174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.501 [2024-11-19 11:42:48.614182] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.501 [2024-11-19 11:42:48.614188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.501 [2024-11-19 11:42:48.614197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.501 [2024-11-19 11:42:48.614203] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.501 [2024-11-19 11:42:48.614211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.501 [2024-11-19 11:42:48.614217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.501 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:35.501 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.501 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.501 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.501 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.501 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.501 11:42:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.501 11:42:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.501 11:42:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.501 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:35.501 11:42:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:35.763 [2024-11-19 11:42:49.013009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
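The sw_hotplug.sh@56-62 entries that follow are the re-attach half of the cycle: one write to rediscover the removed functions, then per device a driver name, the BDF (echoed twice in the trace), and an empty string. The xtrace records only the values written, not their destinations, so the sysfs paths below are assumptions, and the double BDF echo is folded into a single probe; a plausible sketch:

  # Hypothetical re-attach helper matching the echoed values at
  # sw_hotplug.sh@58-62 (sysfs destinations are assumptions):
  attach_device() {
      local bdf=$1 driver=$2
      echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the driver
      echo "$bdf" > /sys/bus/pci/drivers_probe                      # ask the kernel to bind it
      echo '' > "/sys/bus/pci/devices/$bdf/driver_override"         # clear the override
  }
  echo 1 > /sys/bus/pci/rescan   # likely target of the '# echo 1' at sh@56
  attach_device 0000:00:10.0 uio_pci_generic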
00:11:35.763 [2024-11-19 11:42:49.014018] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.763 [2024-11-19 11:42:49.014050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.763 [2024-11-19 11:42:49.014059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.763 [2024-11-19 11:42:49.014069] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.763 [2024-11-19 11:42:49.014077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.763 [2024-11-19 11:42:49.014086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.763 [2024-11-19 11:42:49.014093] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.763 [2024-11-19 11:42:49.014100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.763 [2024-11-19 11:42:49.014106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.763 [2024-11-19 11:42:49.014116] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:35.763 [2024-11-19 11:42:49.014122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.763 [2024-11-19 11:42:49.014129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.763 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:35.763 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.763 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.763 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.763 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.763 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.763 11:42:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.763 11:42:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.763 11:42:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
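Once this third cycle completes, the first helper invocation ends and the test re-arms for a second pass: the trace just below reports the 44.64 s total, then toggles SPDK's own NVMe hotplug poller off and back on over RPC (sw_hotplug.sh@119-120) before calling debug_remove_attach_helper again. Those two RPCs, exactly as traced:

  # Toggle the bdev_nvme hotplug poller, as in the trace below
  # (rpc_cmd wraps scripts/rpc.py against the running app):
  rpc_cmd bdev_nvme_set_hotplug -d   # disable
  rpc_cmd bdev_nvme_set_hotplug -e   # re-enable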
00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.024 11:42:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.64 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.64 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:11:48.347 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:48.347 11:43:01 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:48.347 11:43:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:48.347 11:43:01 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.938 11:43:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.938 11:43:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 11:43:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:54.938 11:43:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:54.938 [2024-11-19 11:43:07.579166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:54.938 [2024-11-19 11:43:07.579948] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.938 [2024-11-19 11:43:07.579979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.938 [2024-11-19 11:43:07.579991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.938 [2024-11-19 11:43:07.580004] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.938 [2024-11-19 11:43:07.580013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.938 [2024-11-19 11:43:07.580020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.938 [2024-11-19 11:43:07.580027] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.938 [2024-11-19 11:43:07.580034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.938 [2024-11-19 11:43:07.580043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.938 [2024-11-19 11:43:07.580050] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.938 [2024-11-19 11:43:07.580058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.938 [2024-11-19 11:43:07.580064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.938 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:54.938 11:43:08 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:54.938 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:54.938 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.938 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.938 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.938 11:43:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.938 11:43:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.938 [2024-11-19 11:43:08.079176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:54.938 [2024-11-19 11:43:08.079920] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.938 [2024-11-19 11:43:08.079952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.938 [2024-11-19 11:43:08.079962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.938 [2024-11-19 11:43:08.079974] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.938 [2024-11-19 11:43:08.079982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.938 [2024-11-19 11:43:08.079990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.938 [2024-11-19 11:43:08.079996] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.938 [2024-11-19 11:43:08.080004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.938 [2024-11-19 11:43:08.080011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.938 [2024-11-19 11:43:08.080018] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.938 [2024-11-19 11:43:08.080024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.938 [2024-11-19 11:43:08.080036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.938 11:43:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.938 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:54.938 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:55.200 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:55.200 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.200 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.200 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.200 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.200 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.200 11:43:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.200 11:43:08 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.462 11:43:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.462 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:55.462 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:55.462 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.462 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.462 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:55.462 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:55.462 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.462 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:55.462 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:55.462 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:55.462 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:55.724 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:55.724 11:43:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:07.955 11:43:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.955 11:43:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.955 11:43:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:07.955 11:43:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.955 11:43:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.955 11:43:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.955 [2024-11-19 11:43:20.979448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
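The sw_hotplug.sh@66-71 entries inside the block above are the success check for a cycle: give the devices time to come back (the sleep 12 presumably being 2 * hotplug_wait, with hotplug_wait=6), then require that the rediscovered BDF list matches the original pair exactly. Reassembled from the trace:

  # Post-attach verification, per the xtrace at sw_hotplug.sh@66-71:
  sleep 12
  bdfs=($(bdev_bdfs))
  [[ ${bdfs[*]} == '0000:00:10.0 0000:00:11.0' ]]   # both drives back?

The escaped pattern in the log (\0\0\0\0\:...) is simply how bash xtrace prints the quoted right-hand side of that [[ ]] comparison.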
00:12:07.955 [2024-11-19 11:43:20.980219] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.955 [2024-11-19 11:43:20.980246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.955 [2024-11-19 11:43:20.980257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.955 [2024-11-19 11:43:20.980269] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.955 [2024-11-19 11:43:20.980277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.955 [2024-11-19 11:43:20.980284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.955 [2024-11-19 11:43:20.980292] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.955 [2024-11-19 11:43:20.980298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.955 [2024-11-19 11:43:20.980306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.955 [2024-11-19 11:43:20.980312] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:07.955 [2024-11-19 11:43:20.980320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.955 [2024-11-19 11:43:20.980326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:07.955 11:43:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:08.216 [2024-11-19 11:43:21.379453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
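The (( 2 > 0 )), (( 1 > 0 )), (( 0 > 0 )) progression around these entries is the departure poll counting down as each controller vanishes from the bdev list. Rebuilt from the repeating sw_hotplug.sh@50-51 trace lines (the exact ordering of the refresh and the printf is reconstructed):

  # Departure poll, reconstructed from sw_hotplug.sh@50-51:
  bdfs=($(bdev_bdfs))
  while (( ${#bdfs[@]} > 0 )); do
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
      bdfs=($(bdev_bdfs))
  done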
00:12:08.216 [2024-11-19 11:43:21.380176] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.216 [2024-11-19 11:43:21.380209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.216 [2024-11-19 11:43:21.380218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.216 [2024-11-19 11:43:21.380229] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.216 [2024-11-19 11:43:21.380236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.216 [2024-11-19 11:43:21.380245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.216 [2024-11-19 11:43:21.380251] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.216 [2024-11-19 11:43:21.380259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.216 [2024-11-19 11:43:21.380265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.216 [2024-11-19 11:43:21.380273] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.216 [2024-11-19 11:43:21.380279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.216 [2024-11-19 11:43:21.380287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.216 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:08.216 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:08.216 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:08.216 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.216 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.216 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.216 11:43:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.216 11:43:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.216 11:43:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.216 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:08.216 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:08.216 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:08.216 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:08.216 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:08.477 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:08.477 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:08.477 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:08.477 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:08.477 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:08.477 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:08.477 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:08.477 11:43:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.711 11:43:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:20.711 11:43:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.711 11:43:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.711 11:43:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.711 11:43:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.711 11:43:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:20.711 11:43:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:20.711 [2024-11-19 11:43:33.879731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
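The paired '# echo 1' entries at sw_hotplug.sh@39-40 just above, one per device inside for dev in "${nvmes[@]}", are the writes that start each cycle by surprise-removing both functions; the controller-failed and abort messages that follow are the driver reacting. The trace records only the value written, so the path below is an assumption:

  # Hypothetical removal write behind '# echo 1' at sw_hotplug.sh@40:
  for dev in 0000:00:10.0 0000:00:11.0; do
      echo 1 > "/sys/bus/pci/devices/$dev/remove"   # drop the PCI function
  done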
00:12:20.711 [2024-11-19 11:43:33.880496] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.711 [2024-11-19 11:43:33.880525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.711 [2024-11-19 11:43:33.880538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.711 [2024-11-19 11:43:33.880550] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.711 [2024-11-19 11:43:33.880560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.711 [2024-11-19 11:43:33.880567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.712 [2024-11-19 11:43:33.880575] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.712 [2024-11-19 11:43:33.880581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.712 [2024-11-19 11:43:33.880588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.712 [2024-11-19 11:43:33.880594] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:20.712 [2024-11-19 11:43:33.880604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.712 [2024-11-19 11:43:33.880611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.973 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:20.973 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:20.973 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:20.973 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.973 11:43:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.973 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.973 11:43:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.973 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:21.234 11:43:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.234 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:21.234 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:21.234 [2024-11-19 11:43:34.479741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
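One small bash detail worth noting in the @51 entries: printf re-applies its format string once per remaining argument, so a single call reports every BDF still present, and the argument list shrinks from two to one as the drives disappear:

  printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
  # -> Still waiting for 0000:00:10.0 to be gone
  # -> Still waiting for 0000:00:11.0 to be gone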
00:12:21.234 [2024-11-19 11:43:34.480464] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.234 [2024-11-19 11:43:34.480494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.234 [2024-11-19 11:43:34.480503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.234 [2024-11-19 11:43:34.480514] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.234 [2024-11-19 11:43:34.480522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.234 [2024-11-19 11:43:34.480530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.234 [2024-11-19 11:43:34.480537] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.234 [2024-11-19 11:43:34.480547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.234 [2024-11-19 11:43:34.480553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.234 [2024-11-19 11:43:34.480561] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.234 [2024-11-19 11:43:34.480567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.234 [2024-11-19 11:43:34.480575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.806 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:21.806 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:21.806 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:21.807 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:21.807 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:21.807 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:21.807 11:43:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.807 11:43:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.807 11:43:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.807 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:21.807 11:43:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:21.807 11:43:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:21.807 11:43:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:21.807 11:43:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:21.807 11:43:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:21.807 11:43:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:21.807 11:43:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:21.807 11:43:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:21.807 11:43:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
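The autotest_common.sh@707-720 entries reported around each helper run (time=44.64 earlier, time=45.72 just below) come from a timing wrapper: TIMEFORMAT=%2R makes bash's time builtin emit only the real time with two decimals, which the caller then records as helper_time. A sketch consistent with the traced line tags (the redirection details are assumptions; the real wrapper lets the wrapped command's output through):

  # Timing wrapper suggested by the autotest_common.sh@707-720 trace:
  timing_cmd() {
      local cmd_es=0 time=0 TIMEFORMAT=%2R
      time=$( { time "$@" > /dev/null 2>&1; } 2>&1 ) || cmd_es=$?
      echo "$time"
      return "$cmd_es"
  }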
00:12:21.807 11:43:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:21.807 11:43:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:21.807 11:43:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:34.049 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:34.049 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:34.049 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:34.049 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:34.049 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:34.049 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.050 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:34.050 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.72 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.72 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:34.050 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.72 00:12:34.050 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.72 2 00:12:34.050 remove_attach_helper took 45.72s to complete (handling 2 nvme drive(s)) 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:34.050 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 78982 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 78982 ']' 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 78982 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78982 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:34.050 killing process with pid 78982 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78982' 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@969 -- # kill 78982 00:12:34.050 11:43:47 sw_hotplug -- common/autotest_common.sh@974 -- # wait 78982 00:12:34.311 11:43:47 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:34.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:35.146 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:35.146 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:35.146 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.146 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:35.146 00:12:35.146 real 2m29.214s 00:12:35.146 user 1m49.486s 00:12:35.146 sys 0m18.175s 00:12:35.146 11:43:48 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:35.146 ************************************ 00:12:35.146 11:43:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:35.146 END TEST sw_hotplug 00:12:35.146 ************************************ 00:12:35.146 11:43:48 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:35.146 11:43:48 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:35.146 11:43:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:35.146 11:43:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.146 11:43:48 -- common/autotest_common.sh@10 -- # set +x 00:12:35.146 ************************************ 00:12:35.146 START TEST nvme_xnvme 00:12:35.146 ************************************ 00:12:35.146 11:43:48 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:35.408 * Looking for test storage... 00:12:35.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:35.408 11:43:48 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:35.408 11:43:48 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:35.408 11:43:48 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:35.408 11:43:48 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.408 11:43:48 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:35.408 11:43:48 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.408 11:43:48 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:35.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.408 --rc genhtml_branch_coverage=1 00:12:35.408 --rc genhtml_function_coverage=1 00:12:35.408 --rc genhtml_legend=1 00:12:35.408 --rc geninfo_all_blocks=1 00:12:35.408 --rc geninfo_unexecuted_blocks=1 00:12:35.408 00:12:35.408 ' 00:12:35.408 11:43:48 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:35.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.408 --rc genhtml_branch_coverage=1 00:12:35.408 --rc genhtml_function_coverage=1 00:12:35.408 --rc genhtml_legend=1 00:12:35.408 --rc geninfo_all_blocks=1 00:12:35.408 --rc geninfo_unexecuted_blocks=1 00:12:35.408 00:12:35.408 ' 00:12:35.408 11:43:48 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:35.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.408 --rc genhtml_branch_coverage=1 00:12:35.408 --rc genhtml_function_coverage=1 00:12:35.408 --rc genhtml_legend=1 00:12:35.408 --rc geninfo_all_blocks=1 00:12:35.408 --rc geninfo_unexecuted_blocks=1 00:12:35.408 00:12:35.409 ' 00:12:35.409 11:43:48 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:35.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.409 --rc genhtml_branch_coverage=1 00:12:35.409 --rc genhtml_function_coverage=1 00:12:35.409 --rc genhtml_legend=1 00:12:35.409 --rc geninfo_all_blocks=1 00:12:35.409 --rc geninfo_unexecuted_blocks=1 00:12:35.409 00:12:35.409 ' 00:12:35.409 11:43:48 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.409 11:43:48 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.409 11:43:48 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.409 11:43:48 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.409 11:43:48 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.409 11:43:48 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.409 11:43:48 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.409 11:43:48 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.409 11:43:48 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:35.409 11:43:48 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.409 11:43:48 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:35.409 11:43:48 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:35.409 11:43:48 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.409 11:43:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:35.409 ************************************ 00:12:35.409 START TEST xnvme_to_malloc_dd_copy 00:12:35.409 ************************************ 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:35.409 11:43:48 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:35.409 11:43:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:35.409 { 00:12:35.409 "subsystems": [ 00:12:35.409 { 00:12:35.409 "subsystem": "bdev", 00:12:35.409 "config": [ 00:12:35.409 { 00:12:35.409 "params": { 00:12:35.409 "block_size": 512, 00:12:35.409 "num_blocks": 2097152, 00:12:35.409 "name": "malloc0" 00:12:35.409 }, 00:12:35.409 "method": "bdev_malloc_create" 00:12:35.409 }, 00:12:35.409 { 00:12:35.409 "params": { 00:12:35.409 "io_mechanism": "libaio", 00:12:35.409 "filename": "/dev/nullb0", 00:12:35.409 "name": "null0" 00:12:35.409 }, 00:12:35.409 "method": "bdev_xnvme_create" 00:12:35.409 }, 00:12:35.409 { 00:12:35.409 "method": "bdev_wait_for_examine" 00:12:35.409 } 00:12:35.409 ] 00:12:35.409 } 00:12:35.409 ] 00:12:35.409 } 00:12:35.409 [2024-11-19 11:43:48.805773] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
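[Editor's note] The gen_conf JSON above is the whole contract between this test and spdk_dd: a malloc bdev as one endpoint, an xnvme bdev over /dev/nullb0 as the other, and bdev_wait_for_examine so both exist before I/O starts. A minimal standalone sketch of the same invocation follows; the /tmp path is an assumption, everything else is taken from the config printed above.

  cat > /tmp/xnvme_copy.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_malloc_create",
      "params": { "name": "malloc0", "num_blocks": 2097152, "block_size": 512 } },
    { "method": "bdev_xnvme_create",
      "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio" } },
    { "method": "bdev_wait_for_examine" } ] } ] }
  EOF
  # 2097152 blocks x 512 B = 1 GiB, matching the null_blk gb=1 device loaded above
  ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json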
00:12:35.409 [2024-11-19 11:43:48.805907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80359 ] 00:12:35.670 [2024-11-19 11:43:48.942644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.670 [2024-11-19 11:43:48.991469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.058  [2024-11-19T11:43:51.413Z] Copying: 222/1024 [MB] (222 MBps) [2024-11-19T11:43:52.356Z] Copying: 445/1024 [MB] (222 MBps) [2024-11-19T11:43:53.299Z] Copying: 667/1024 [MB] (222 MBps) [2024-11-19T11:43:53.870Z] Copying: 938/1024 [MB] (270 MBps) [2024-11-19T11:43:54.132Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:12:40.720 00:12:40.720 11:43:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:40.720 11:43:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:40.720 11:43:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:40.720 11:43:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:40.720 { 00:12:40.720 "subsystems": [ 00:12:40.720 { 00:12:40.720 "subsystem": "bdev", 00:12:40.720 "config": [ 00:12:40.720 { 00:12:40.720 "params": { 00:12:40.720 "block_size": 512, 00:12:40.720 "num_blocks": 2097152, 00:12:40.720 "name": "malloc0" 00:12:40.720 }, 00:12:40.720 "method": "bdev_malloc_create" 00:12:40.720 }, 00:12:40.720 { 00:12:40.720 "params": { 00:12:40.720 "io_mechanism": "libaio", 00:12:40.720 "filename": "/dev/nullb0", 00:12:40.720 "name": "null0" 00:12:40.720 }, 00:12:40.720 "method": "bdev_xnvme_create" 00:12:40.720 }, 00:12:40.720 { 00:12:40.720 "method": "bdev_wait_for_examine" 00:12:40.720 } 00:12:40.720 ] 00:12:40.720 } 00:12:40.720 ] 00:12:40.720 } 00:12:40.720 [2024-11-19 11:43:53.973309] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
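[Editor's note] The forward pass just copied the 1 GiB malloc bdev into null0 via libaio at an average of 239 MBps; the run starting here is the verification direction, and only the input/output flags change. Sketch, reusing the assumed config file from the note above:

  # read back from the xnvme bdev into the malloc bdev
  ./build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /tmp/xnvme_copy.json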
00:12:40.720 [2024-11-19 11:43:53.973433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80420 ] 00:12:40.720 [2024-11-19 11:43:54.107641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.981 [2024-11-19 11:43:54.140995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.365  [2024-11-19T11:43:56.755Z] Copying: 306/1024 [MB] (306 MBps) [2024-11-19T11:43:57.699Z] Copying: 614/1024 [MB] (308 MBps) [2024-11-19T11:43:57.960Z] Copying: 922/1024 [MB] (308 MBps) [2024-11-19T11:43:58.221Z] Copying: 1024/1024 [MB] (average 307 MBps) 00:12:44.809 00:12:44.809 11:43:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:44.809 11:43:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:44.809 11:43:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:44.809 11:43:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:44.809 11:43:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:44.809 11:43:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:44.809 { 00:12:44.809 "subsystems": [ 00:12:44.809 { 00:12:44.809 "subsystem": "bdev", 00:12:44.809 "config": [ 00:12:44.809 { 00:12:44.809 "params": { 00:12:44.809 "block_size": 512, 00:12:44.809 "num_blocks": 2097152, 00:12:44.809 "name": "malloc0" 00:12:44.809 }, 00:12:44.809 "method": "bdev_malloc_create" 00:12:44.809 }, 00:12:44.809 { 00:12:44.809 "params": { 00:12:44.809 "io_mechanism": "io_uring", 00:12:44.809 "filename": "/dev/nullb0", 00:12:44.809 "name": "null0" 00:12:44.809 }, 00:12:44.809 "method": "bdev_xnvme_create" 00:12:44.809 }, 00:12:44.809 { 00:12:44.809 "method": "bdev_wait_for_examine" 00:12:44.809 } 00:12:44.809 ] 00:12:44.809 } 00:12:44.809 ] 00:12:44.809 } 00:12:44.809 [2024-11-19 11:43:58.112141] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
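[Editor's note] The reverse libaio pass averaged 307 MBps. From here the loop repeats both directions with io_mechanism=io_uring; nothing else in the config changes. If reproducing by hand, a one-line jq edit over the sketch config would make the same swap (the jq path is assumed from the JSON shape printed above):

  jq '(.subsystems[].config[]
       | select(.method == "bdev_xnvme_create")
       | .params.io_mechanism) = "io_uring"' \
     /tmp/xnvme_copy.json > /tmp/xnvme_copy_uring.json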
00:12:44.809 [2024-11-19 11:43:58.112251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80476 ] 00:12:45.070 [2024-11-19 11:43:58.245703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.070 [2024-11-19 11:43:58.280354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.452  [2024-11-19T11:44:00.808Z] Copying: 315/1024 [MB] (315 MBps) [2024-11-19T11:44:01.752Z] Copying: 629/1024 [MB] (314 MBps) [2024-11-19T11:44:02.012Z] Copying: 945/1024 [MB] (315 MBps) [2024-11-19T11:44:02.272Z] Copying: 1024/1024 [MB] (average 315 MBps) 00:12:48.860 00:12:48.860 11:44:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:48.860 11:44:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:48.860 11:44:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:48.860 11:44:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:48.860 { 00:12:48.860 "subsystems": [ 00:12:48.860 { 00:12:48.860 "subsystem": "bdev", 00:12:48.860 "config": [ 00:12:48.860 { 00:12:48.860 "params": { 00:12:48.860 "block_size": 512, 00:12:48.860 "num_blocks": 2097152, 00:12:48.860 "name": "malloc0" 00:12:48.860 }, 00:12:48.860 "method": "bdev_malloc_create" 00:12:48.860 }, 00:12:48.860 { 00:12:48.860 "params": { 00:12:48.860 "io_mechanism": "io_uring", 00:12:48.860 "filename": "/dev/nullb0", 00:12:48.860 "name": "null0" 00:12:48.860 }, 00:12:48.860 "method": "bdev_xnvme_create" 00:12:48.860 }, 00:12:48.860 { 00:12:48.860 "method": "bdev_wait_for_examine" 00:12:48.860 } 00:12:48.860 ] 00:12:48.860 } 00:12:48.860 ] 00:12:48.860 } 00:12:48.860 [2024-11-19 11:44:02.144443] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:48.860 [2024-11-19 11:44:02.144555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80530 ] 00:12:49.122 [2024-11-19 11:44:02.277388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.122 [2024-11-19 11:44:02.307167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.508  [2024-11-19T11:44:04.864Z] Copying: 317/1024 [MB] (317 MBps) [2024-11-19T11:44:05.809Z] Copying: 635/1024 [MB] (318 MBps) [2024-11-19T11:44:05.809Z] Copying: 954/1024 [MB] (318 MBps) [2024-11-19T11:44:06.070Z] Copying: 1024/1024 [MB] (average 318 MBps) 00:12:52.658 00:12:52.658 11:44:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:52.658 11:44:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:52.919 00:12:52.919 real 0m17.385s 00:12:52.919 user 0m14.443s 00:12:52.919 sys 0m2.432s 00:12:52.919 11:44:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.919 11:44:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 ************************************ 00:12:52.919 END TEST xnvme_to_malloc_dd_copy 00:12:52.919 ************************************ 00:12:52.919 11:44:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:52.919 11:44:06 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:52.919 11:44:06 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.919 11:44:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 ************************************ 00:12:52.919 START TEST xnvme_bdevperf 00:12:52.919 ************************************ 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:52.919 
11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:52.919 11:44:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:52.919 { 00:12:52.919 "subsystems": [ 00:12:52.919 { 00:12:52.919 "subsystem": "bdev", 00:12:52.919 "config": [ 00:12:52.919 { 00:12:52.919 "params": { 00:12:52.919 "io_mechanism": "libaio", 00:12:52.919 "filename": "/dev/nullb0", 00:12:52.919 "name": "null0" 00:12:52.919 }, 00:12:52.919 "method": "bdev_xnvme_create" 00:12:52.919 }, 00:12:52.919 { 00:12:52.919 "method": "bdev_wait_for_examine" 00:12:52.919 } 00:12:52.919 ] 00:12:52.919 } 00:12:52.919 ] 00:12:52.919 } 00:12:52.919 [2024-11-19 11:44:06.223589] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:52.919 [2024-11-19 11:44:06.223703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80607 ] 00:12:53.180 [2024-11-19 11:44:06.358155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.180 [2024-11-19 11:44:06.401045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.180 Running I/O for 5 seconds... 00:12:55.503 205952.00 IOPS, 804.50 MiB/s [2024-11-19T11:44:09.857Z] 206272.00 IOPS, 805.75 MiB/s [2024-11-19T11:44:10.800Z] 206378.67 IOPS, 806.17 MiB/s [2024-11-19T11:44:11.743Z] 206368.00 IOPS, 806.12 MiB/s 00:12:58.331 Latency(us) 00:12:58.331 [2024-11-19T11:44:11.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.331 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:58.331 null0 : 5.00 206452.15 806.45 0.00 0.00 307.75 111.06 1524.97 00:12:58.331 [2024-11-19T11:44:11.743Z] =================================================================================================================== 00:12:58.331 [2024-11-19T11:44:11.743Z] Total : 206452.15 806.45 0.00 0.00 307.75 111.06 1524.97 00:12:58.331 11:44:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:58.331 11:44:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:58.331 11:44:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:58.331 11:44:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:58.331 11:44:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:58.331 11:44:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:58.331 { 00:12:58.332 "subsystems": [ 00:12:58.332 { 00:12:58.332 "subsystem": "bdev", 00:12:58.332 "config": [ 00:12:58.332 { 00:12:58.332 "params": { 00:12:58.332 "io_mechanism": "io_uring", 00:12:58.332 "filename": "/dev/nullb0", 00:12:58.332 "name": "null0" 00:12:58.332 }, 00:12:58.332 "method": "bdev_xnvme_create" 00:12:58.332 }, 00:12:58.332 { 00:12:58.332 "method": 
"bdev_wait_for_examine" 00:12:58.332 } 00:12:58.332 ] 00:12:58.332 } 00:12:58.332 ] 00:12:58.332 } 00:12:58.332 [2024-11-19 11:44:11.703899] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:58.332 [2024-11-19 11:44:11.704005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80672 ] 00:12:58.593 [2024-11-19 11:44:11.837147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.593 [2024-11-19 11:44:11.871635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.593 Running I/O for 5 seconds... 00:13:00.921 235072.00 IOPS, 918.25 MiB/s [2024-11-19T11:44:15.276Z] 234912.00 IOPS, 917.62 MiB/s [2024-11-19T11:44:16.220Z] 234901.33 IOPS, 917.58 MiB/s [2024-11-19T11:44:17.165Z] 234928.00 IOPS, 917.69 MiB/s 00:13:03.753 Latency(us) 00:13:03.753 [2024-11-19T11:44:17.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.753 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:03.754 null0 : 5.00 234906.96 917.61 0.00 0.00 270.40 147.30 1487.16 00:13:03.754 [2024-11-19T11:44:17.166Z] =================================================================================================================== 00:13:03.754 [2024-11-19T11:44:17.166Z] Total : 234906.96 917.61 0.00 0.00 270.40 147.30 1487.16 00:13:03.754 11:44:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:03.754 11:44:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:03.754 00:13:03.754 real 0m10.983s 00:13:03.754 user 0m8.630s 00:13:03.754 sys 0m2.107s 00:13:03.754 ************************************ 00:13:03.754 END TEST xnvme_bdevperf 00:13:03.754 ************************************ 00:13:03.754 11:44:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.754 11:44:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:04.015 00:13:04.015 real 0m28.631s 00:13:04.015 user 0m23.196s 00:13:04.015 sys 0m4.656s 00:13:04.015 11:44:17 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:04.015 11:44:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.015 ************************************ 00:13:04.015 END TEST nvme_xnvme 00:13:04.015 ************************************ 00:13:04.015 11:44:17 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:04.015 11:44:17 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:04.015 11:44:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.015 11:44:17 -- common/autotest_common.sh@10 -- # set +x 00:13:04.015 ************************************ 00:13:04.015 START TEST blockdev_xnvme 00:13:04.015 ************************************ 00:13:04.015 11:44:17 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:04.015 * Looking for test storage... 
00:13:04.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:04.015 11:44:17 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:04.015 11:44:17 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:04.015 11:44:17 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:04.015 11:44:17 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.015 11:44:17 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:04.015 11:44:17 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.015 11:44:17 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:04.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.015 --rc genhtml_branch_coverage=1 00:13:04.015 --rc genhtml_function_coverage=1 00:13:04.015 --rc genhtml_legend=1 00:13:04.016 --rc geninfo_all_blocks=1 00:13:04.016 --rc geninfo_unexecuted_blocks=1 00:13:04.016 00:13:04.016 ' 00:13:04.016 11:44:17 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:04.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.016 --rc genhtml_branch_coverage=1 00:13:04.016 --rc genhtml_function_coverage=1 00:13:04.016 --rc genhtml_legend=1 
00:13:04.016 --rc geninfo_all_blocks=1 00:13:04.016 --rc geninfo_unexecuted_blocks=1 00:13:04.016 00:13:04.016 ' 00:13:04.016 11:44:17 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:04.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.016 --rc genhtml_branch_coverage=1 00:13:04.016 --rc genhtml_function_coverage=1 00:13:04.016 --rc genhtml_legend=1 00:13:04.016 --rc geninfo_all_blocks=1 00:13:04.016 --rc geninfo_unexecuted_blocks=1 00:13:04.016 00:13:04.016 ' 00:13:04.016 11:44:17 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:04.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.016 --rc genhtml_branch_coverage=1 00:13:04.016 --rc genhtml_function_coverage=1 00:13:04.016 --rc genhtml_legend=1 00:13:04.016 --rc geninfo_all_blocks=1 00:13:04.016 --rc geninfo_unexecuted_blocks=1 00:13:04.016 00:13:04.016 ' 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80811 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 80811 00:13:04.016 11:44:17 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 80811 ']' 00:13:04.016 11:44:17 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:04.016 11:44:17 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:04.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.016 11:44:17 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.016 11:44:17 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:04.016 11:44:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.016 11:44:17 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:04.278 [2024-11-19 11:44:17.513843] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:04.278 [2024-11-19 11:44:17.514004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80811 ] 00:13:04.278 [2024-11-19 11:44:17.650356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.539 [2024-11-19 11:44:17.694188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.111 11:44:18 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:05.112 11:44:18 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:05.112 11:44:18 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:05.112 11:44:18 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:05.112 11:44:18 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:05.112 11:44:18 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:05.112 11:44:18 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:05.373 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:05.635 Waiting for block devices as requested 00:13:05.635 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:05.635 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:05.635 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:05.896 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:11.187 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1659 -- # 
is_block_zoned nvme1n1 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:11.187 11:44:24 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:11.187 nvme0n1 00:13:11.187 nvme1n1 00:13:11.187 nvme2n1 00:13:11.187 nvme2n2 00:13:11.187 nvme2n3 00:13:11.187 nvme3n1 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:11.187 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:11.187 11:44:24 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.188 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.188 11:44:24 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.188 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.188 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:11.188 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:11.188 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.188 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:11.188 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "279dbd4e-175b-41ce-8ae9-cd6fba4b45f9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "279dbd4e-175b-41ce-8ae9-cd6fba4b45f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "f4c18ff7-513c-426b-812d-6c1ce71947ea"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f4c18ff7-513c-426b-812d-6c1ce71947ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a042de33-80c8-4306-a0a4-e4de15e26e1a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a042de33-80c8-4306-a0a4-e4de15e26e1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c2296c32-1514-4ced-9bc4-4f9a38bd9b78"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c2296c32-1514-4ced-9bc4-4f9a38bd9b78",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "8a7d53c3-1815-4a76-8da4-a1559e36b296"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8a7d53c3-1815-4a76-8da4-a1559e36b296",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "03f92a31-06d6-42f3-938f-dc6194ecde83"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "03f92a31-06d6-42f3-938f-dc6194ecde83",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:11.188 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:11.188 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:11.188 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:11.188 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:11.188 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 80811 
00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 80811 ']' 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 80811 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80811 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:11.188 killing process with pid 80811 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80811' 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 80811 00:13:11.188 11:44:24 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 80811 00:13:11.449 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:11.449 11:44:24 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:11.449 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:11.449 11:44:24 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.449 11:44:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.449 ************************************ 00:13:11.449 START TEST bdev_hello_world 00:13:11.449 ************************************ 00:13:11.449 11:44:24 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:11.449 [2024-11-19 11:44:24.688131] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:11.449 [2024-11-19 11:44:24.688242] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81159 ] 00:13:11.449 [2024-11-19 11:44:24.822126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.449 [2024-11-19 11:44:24.857437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.711 [2024-11-19 11:44:25.015695] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:11.711 [2024-11-19 11:44:25.015741] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:11.711 [2024-11-19 11:44:25.015755] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:11.711 [2024-11-19 11:44:25.017254] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:11.711 [2024-11-19 11:44:25.017621] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:11.711 [2024-11-19 11:44:25.017646] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:11.711 [2024-11-19 11:44:25.017878] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:13:11.711 00:13:11.711 [2024-11-19 11:44:25.017905] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:11.971 00:13:11.971 real 0m0.522s 00:13:11.971 user 0m0.270s 00:13:11.971 sys 0m0.143s 00:13:11.971 11:44:25 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.971 ************************************ 00:13:11.971 END TEST bdev_hello_world 00:13:11.971 ************************************ 00:13:11.971 11:44:25 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:11.972 11:44:25 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:11.972 11:44:25 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:11.972 11:44:25 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.972 11:44:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.972 ************************************ 00:13:11.972 START TEST bdev_bounds 00:13:11.972 ************************************ 00:13:11.972 11:44:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:11.972 Process bdevio pid: 81184 00:13:11.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.972 11:44:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81184 00:13:11.972 11:44:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:11.972 11:44:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81184' 00:13:11.972 11:44:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81184 00:13:11.972 11:44:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81184 ']' 00:13:11.972 11:44:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.972 11:44:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.972 11:44:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.972 11:44:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.972 11:44:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:11.972 11:44:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:11.972 [2024-11-19 11:44:25.271454] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
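[Editor's note] bdev_bounds drives the bdevio app against the same bdev.json and then kicks the suites off over its RPC channel, as the log records next. A sketch of that two-step flow under the flags shown (-w waits for the RPC trigger; -s 0 is the pre-reserved memory size from PRE_RESERVED_MEM above; backgrounding the app is an assumption about how the two steps overlap):

  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json '' &
  ./test/bdev/bdevio/tests.py perform_tests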
00:13:11.972 [2024-11-19 11:44:25.271567] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81184 ] 00:13:12.233 [2024-11-19 11:44:25.406469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:12.233 [2024-11-19 11:44:25.448297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.233 [2024-11-19 11:44:25.448517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.233 [2024-11-19 11:44:25.448564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.808 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.808 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:12.808 11:44:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:12.808 I/O targets: 00:13:12.808 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:12.808 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:12.808 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:12.808 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:12.808 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:12.808 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:12.808 00:13:12.808 00:13:12.808 CUnit - A unit testing framework for C - Version 2.1-3 00:13:12.808 http://cunit.sourceforge.net/ 00:13:12.808 00:13:12.808 00:13:12.808 Suite: bdevio tests on: nvme3n1 00:13:12.808 Test: blockdev write read block ...passed 00:13:12.808 Test: blockdev write zeroes read block ...passed 00:13:12.808 Test: blockdev write zeroes read no split ...passed 00:13:13.090 Test: blockdev write zeroes read split ...passed 00:13:13.090 Test: blockdev write zeroes read split partial ...passed 00:13:13.090 Test: blockdev reset ...passed 00:13:13.090 Test: blockdev write read 8 blocks ...passed 00:13:13.090 Test: blockdev write read size > 128k ...passed 00:13:13.090 Test: blockdev write read invalid size ...passed 00:13:13.090 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:13.090 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:13.090 Test: blockdev write read max offset ...passed 00:13:13.090 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:13.090 Test: blockdev writev readv 8 blocks ...passed 00:13:13.090 Test: blockdev writev readv 30 x 1block ...passed 00:13:13.090 Test: blockdev writev readv block ...passed 00:13:13.090 Test: blockdev writev readv size > 128k ...passed 00:13:13.090 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:13.090 Test: blockdev comparev and writev ...passed 00:13:13.090 Test: blockdev nvme passthru rw ...passed 00:13:13.090 Test: blockdev nvme passthru vendor specific ...passed 00:13:13.090 Test: blockdev nvme admin passthru ...passed 00:13:13.090 Test: blockdev copy ...passed 00:13:13.090 Suite: bdevio tests on: nvme2n3 00:13:13.090 Test: blockdev write read block ...passed 00:13:13.090 Test: blockdev write zeroes read block ...passed 00:13:13.090 Test: blockdev write zeroes read no split ...passed 00:13:13.090 Test: blockdev write zeroes read split ...passed 00:13:13.090 Test: blockdev write zeroes read split partial ...passed 00:13:13.090 Test: blockdev reset ...passed 
00:13:13.090 Test: blockdev write read 8 blocks ...passed 00:13:13.090 Test: blockdev write read size > 128k ...passed 00:13:13.090 Test: blockdev write read invalid size ...passed 00:13:13.090 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:13.090 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:13.090 Test: blockdev write read max offset ...passed 00:13:13.090 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:13.090 Test: blockdev writev readv 8 blocks ...passed 00:13:13.090 Test: blockdev writev readv 30 x 1block ...passed 00:13:13.090 Test: blockdev writev readv block ...passed 00:13:13.090 Test: blockdev writev readv size > 128k ...passed 00:13:13.090 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:13.090 Test: blockdev comparev and writev ...passed 00:13:13.090 Test: blockdev nvme passthru rw ...passed 00:13:13.090 Test: blockdev nvme passthru vendor specific ...passed 00:13:13.090 Test: blockdev nvme admin passthru ...passed 00:13:13.090 Test: blockdev copy ...passed 00:13:13.090 Suite: bdevio tests on: nvme2n2 00:13:13.090 Test: blockdev write read block ...passed 00:13:13.090 Test: blockdev write zeroes read block ...passed 00:13:13.090 Test: blockdev write zeroes read no split ...passed 00:13:13.090 Test: blockdev write zeroes read split ...passed 00:13:13.090 Test: blockdev write zeroes read split partial ...passed 00:13:13.090 Test: blockdev reset ...passed 00:13:13.090 Test: blockdev write read 8 blocks ...passed 00:13:13.090 Test: blockdev write read size > 128k ...passed 00:13:13.090 Test: blockdev write read invalid size ...passed 00:13:13.090 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:13.090 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:13.090 Test: blockdev write read max offset ...passed 00:13:13.090 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:13.090 Test: blockdev writev readv 8 blocks ...passed 00:13:13.090 Test: blockdev writev readv 30 x 1block ...passed 00:13:13.090 Test: blockdev writev readv block ...passed 00:13:13.090 Test: blockdev writev readv size > 128k ...passed 00:13:13.090 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:13.090 Test: blockdev comparev and writev ...passed 00:13:13.090 Test: blockdev nvme passthru rw ...passed 00:13:13.090 Test: blockdev nvme passthru vendor specific ...passed 00:13:13.090 Test: blockdev nvme admin passthru ...passed 00:13:13.090 Test: blockdev copy ...passed 00:13:13.090 Suite: bdevio tests on: nvme2n1 00:13:13.090 Test: blockdev write read block ...passed 00:13:13.090 Test: blockdev write zeroes read block ...passed 00:13:13.090 Test: blockdev write zeroes read no split ...passed 00:13:13.090 Test: blockdev write zeroes read split ...passed 00:13:13.090 Test: blockdev write zeroes read split partial ...passed 00:13:13.090 Test: blockdev reset ...passed 00:13:13.090 Test: blockdev write read 8 blocks ...passed 00:13:13.090 Test: blockdev write read size > 128k ...passed 00:13:13.090 Test: blockdev write read invalid size ...passed 00:13:13.090 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:13.090 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:13.090 Test: blockdev write read max offset ...passed 00:13:13.090 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:13.090 Test: blockdev writev readv 8 blocks 
...passed 00:13:13.090 Test: blockdev writev readv 30 x 1block ...passed 00:13:13.090 Test: blockdev writev readv block ...passed 00:13:13.090 Test: blockdev writev readv size > 128k ...passed 00:13:13.090 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:13.090 Test: blockdev comparev and writev ...passed 00:13:13.090 Test: blockdev nvme passthru rw ...passed 00:13:13.090 Test: blockdev nvme passthru vendor specific ...passed 00:13:13.090 Test: blockdev nvme admin passthru ...passed 00:13:13.090 Test: blockdev copy ...passed 00:13:13.090 Suite: bdevio tests on: nvme1n1 00:13:13.090 Test: blockdev write read block ...passed 00:13:13.090 Test: blockdev write zeroes read block ...passed 00:13:13.090 Test: blockdev write zeroes read no split ...passed 00:13:13.090 Test: blockdev write zeroes read split ...passed 00:13:13.090 Test: blockdev write zeroes read split partial ...passed 00:13:13.090 Test: blockdev reset ...passed 00:13:13.090 Test: blockdev write read 8 blocks ...passed 00:13:13.090 Test: blockdev write read size > 128k ...passed 00:13:13.090 Test: blockdev write read invalid size ...passed 00:13:13.090 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:13.090 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:13.090 Test: blockdev write read max offset ...passed 00:13:13.090 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:13.090 Test: blockdev writev readv 8 blocks ...passed 00:13:13.090 Test: blockdev writev readv 30 x 1block ...passed 00:13:13.090 Test: blockdev writev readv block ...passed 00:13:13.090 Test: blockdev writev readv size > 128k ...passed 00:13:13.090 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:13.090 Test: blockdev comparev and writev ...passed 00:13:13.090 Test: blockdev nvme passthru rw ...passed 00:13:13.090 Test: blockdev nvme passthru vendor specific ...passed 00:13:13.090 Test: blockdev nvme admin passthru ...passed 00:13:13.090 Test: blockdev copy ...passed 00:13:13.090 Suite: bdevio tests on: nvme0n1 00:13:13.090 Test: blockdev write read block ...passed 00:13:13.090 Test: blockdev write zeroes read block ...passed 00:13:13.090 Test: blockdev write zeroes read no split ...passed 00:13:13.090 Test: blockdev write zeroes read split ...passed 00:13:13.090 Test: blockdev write zeroes read split partial ...passed 00:13:13.090 Test: blockdev reset ...passed 00:13:13.090 Test: blockdev write read 8 blocks ...passed 00:13:13.090 Test: blockdev write read size > 128k ...passed 00:13:13.090 Test: blockdev write read invalid size ...passed 00:13:13.091 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:13.091 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:13.091 Test: blockdev write read max offset ...passed 00:13:13.091 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:13.091 Test: blockdev writev readv 8 blocks ...passed 00:13:13.091 Test: blockdev writev readv 30 x 1block ...passed 00:13:13.091 Test: blockdev writev readv block ...passed 00:13:13.091 Test: blockdev writev readv size > 128k ...passed 00:13:13.091 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:13.091 Test: blockdev comparev and writev ...passed 00:13:13.091 Test: blockdev nvme passthru rw ...passed 00:13:13.091 Test: blockdev nvme passthru vendor specific ...passed 00:13:13.091 Test: blockdev nvme admin passthru ...passed 00:13:13.091 Test: blockdev copy ...passed 
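All six suites above are the same 23-case bdevio battery, run once per bdev, which is where the 138-test total in the run summary below comes from (6 suites x 23 tests). The capacities printed under "I/O targets" are likewise just block count x 4096-byte block size. Both figures can be re-derived in a couple of lines of shell; this is a sketch for checking the log, not part of blockdev.sh, and the variable names are illustrative:

    # Sanity-check the totals reported by CUnit and the I/O targets header.
    # 1 MiB = 1048576 bytes; every bdev in this run uses 4096-byte blocks.
    suites=6 tests_per_suite=23
    echo "expected tests: $((suites * tests_per_suite))"    # 138, as reported
    echo "nvme0n1 MiB: $((1310720 * 4096 / 1048576))"       # 5120, as reported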
00:13:13.091 00:13:13.091 Run Summary: Type Total Ran Passed Failed Inactive 00:13:13.091 suites 6 6 n/a 0 0 00:13:13.091 tests 138 138 138 0 0 00:13:13.091 asserts 780 780 780 0 n/a 00:13:13.091 00:13:13.091 Elapsed time = 0.385 seconds 00:13:13.091 0 00:13:13.091 11:44:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81184 00:13:13.091 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81184 ']' 00:13:13.091 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81184 00:13:13.091 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:13.091 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:13.091 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81184 00:13:13.091 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:13.091 killing process with pid 81184 00:13:13.091 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:13.091 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81184' 00:13:13.091 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81184 00:13:13.091 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81184 00:13:13.355 11:44:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:13.356 00:13:13.356 real 0m1.383s 00:13:13.356 user 0m3.473s 00:13:13.356 sys 0m0.278s 00:13:13.356 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.356 ************************************ 00:13:13.356 END TEST bdev_bounds 00:13:13.356 ************************************ 00:13:13.356 11:44:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:13.356 11:44:26 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:13.356 11:44:26 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:13.356 11:44:26 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.356 11:44:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.356 ************************************ 00:13:13.356 START TEST bdev_nbd 00:13:13.356 ************************************ 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
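The killprocess sequence traced just above (autotest_common.sh@950-974) follows a defensive pattern: confirm the pid argument is non-empty, probe the process with kill -0, look up its command name with ps to make sure it is not a sudo wrapper, then send the default signal and reap it with wait. Below is a condensed reconstruction of that flow, assuming only what the trace shows; the real helper has additional branches (e.g. for non-Linux hosts and forced signals):

    # Condensed sketch of killprocess as traced above; not the full helper.
    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # the '[' -z 81184 ']' guard
        kill -0 "$pid" || return 1                # process must still exist
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        [ "$name" = sudo ] && return 1            # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap and collect exit status
    }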
00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81236 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81236 /var/tmp/spdk-nbd.sock 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81236 ']' 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:13.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:13.356 11:44:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:13.356 [2024-11-19 11:44:26.735031] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
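At this point bdev_svc has been launched in the background as an RPC server on /var/tmp/spdk-nbd.sock (pid 81236), and the harness is blocking in waitforlisten until that socket answers. A simplified stand-in for the wait is sketched below; the real helper in autotest_common.sh also verifies that the pid is still alive and enforces a retry limit, and rpc_get_methods is used here purely as a liveness probe:

    # Minimal stand-in for waitforlisten: poll the UNIX-domain RPC socket
    # until the freshly started bdev_svc responds.
    rpc_sock=/var/tmp/spdk-nbd.sock
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        if "$rpc_py" -s "$rpc_sock" rpc_get_methods &> /dev/null; then
            break    # server is up; the nbd_start_disk calls can begin
        fi
        sleep 0.1
    done

Once the socket answers, the start/stop verification that follows issues nbd_start_disk for each bdev without naming a /dev/nbdN node (the target picks one), waits for the node to appear in /proc/partitions, and probes it with a single 4096-byte O_DIRECT dd read before tearing it down again.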
00:13:13.356 [2024-11-19 11:44:26.735181] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.617 [2024-11-19 11:44:26.865153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.617 [2024-11-19 11:44:26.915602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.560 
1+0 records in 00:13:14.560 1+0 records out 00:13:14.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000824747 s, 5.0 MB/s 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:14.560 11:44:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.822 1+0 records in 00:13:14.822 1+0 records out 00:13:14.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106196 s, 3.9 MB/s 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:14.822 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:15.084 11:44:28 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.084 1+0 records in 00:13:15.084 1+0 records out 00:13:15.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012339 s, 3.3 MB/s 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:15.084 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.347 1+0 records in 00:13:15.347 1+0 records out 00:13:15.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702603 s, 5.8 MB/s 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:15.347 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.608 1+0 records in 00:13:15.608 1+0 records out 00:13:15.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102007 s, 4.0 MB/s 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:15.608 11:44:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:15.869 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:15.870 11:44:29 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.870 1+0 records in 00:13:15.870 1+0 records out 00:13:15.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121587 s, 3.4 MB/s 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:15.870 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:16.130 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:16.130 { 00:13:16.130 "nbd_device": "/dev/nbd0", 00:13:16.130 "bdev_name": "nvme0n1" 00:13:16.130 }, 00:13:16.130 { 00:13:16.130 "nbd_device": "/dev/nbd1", 00:13:16.130 "bdev_name": "nvme1n1" 00:13:16.130 }, 00:13:16.130 { 00:13:16.130 "nbd_device": "/dev/nbd2", 00:13:16.130 "bdev_name": "nvme2n1" 00:13:16.130 }, 00:13:16.130 { 00:13:16.130 "nbd_device": "/dev/nbd3", 00:13:16.130 "bdev_name": "nvme2n2" 00:13:16.130 }, 00:13:16.130 { 00:13:16.130 "nbd_device": "/dev/nbd4", 00:13:16.130 "bdev_name": "nvme2n3" 00:13:16.130 }, 00:13:16.130 { 00:13:16.130 "nbd_device": "/dev/nbd5", 00:13:16.130 "bdev_name": "nvme3n1" 00:13:16.130 } 00:13:16.130 ]' 00:13:16.130 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:16.130 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:16.130 { 00:13:16.130 "nbd_device": "/dev/nbd0", 00:13:16.130 "bdev_name": "nvme0n1" 00:13:16.130 }, 00:13:16.130 { 00:13:16.130 "nbd_device": "/dev/nbd1", 00:13:16.130 "bdev_name": "nvme1n1" 00:13:16.130 }, 00:13:16.130 { 00:13:16.130 "nbd_device": "/dev/nbd2", 00:13:16.130 "bdev_name": "nvme2n1" 00:13:16.130 }, 00:13:16.130 { 00:13:16.130 "nbd_device": "/dev/nbd3", 00:13:16.130 "bdev_name": "nvme2n2" 00:13:16.130 }, 00:13:16.130 { 00:13:16.130 "nbd_device": "/dev/nbd4", 00:13:16.130 "bdev_name": "nvme2n3" 00:13:16.130 }, 00:13:16.130 { 00:13:16.130 "nbd_device": "/dev/nbd5", 00:13:16.130 "bdev_name": "nvme3n1" 00:13:16.130 } 00:13:16.130 ]' 00:13:16.130 11:44:29 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:16.130 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:16.130 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.130 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:16.130 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:16.130 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:16.130 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.130 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:16.392 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:16.392 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:16.392 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:16.392 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.392 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.392 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:16.392 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:16.392 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.392 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.392 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:16.654 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:16.654 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:16.654 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:16.654 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.654 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.654 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:16.654 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:16.654 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.654 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.654 11:44:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.917 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:17.177 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:17.177 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:17.177 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:17.178 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.178 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.178 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:17.178 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:17.178 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.178 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.178 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:17.439 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:17.439 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:17.439 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:17.439 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.439 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.439 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:17.439 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:17.439 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.439 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:17.439 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.439 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:17.700 11:44:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:17.961 /dev/nbd0 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.961 1+0 records in 00:13:17.961 1+0 records out 00:13:17.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530877 s, 7.7 MB/s 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:17.961 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:18.223 /dev/nbd1 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.223 1+0 records in 00:13:18.223 1+0 records out 00:13:18.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057042 s, 7.2 MB/s 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:18.223 11:44:31 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:18.223 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:18.485 /dev/nbd10 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.485 1+0 records in 00:13:18.485 1+0 records out 00:13:18.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443798 s, 9.2 MB/s 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:18.485 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:18.747 /dev/nbd11 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.747 11:44:31 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.747 1+0 records in 00:13:18.747 1+0 records out 00:13:18.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123607 s, 3.3 MB/s 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:18.747 11:44:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:18.747 /dev/nbd12 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:19.010 1+0 records in 00:13:19.010 1+0 records out 00:13:19.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118533 s, 3.5 MB/s 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:19.010 /dev/nbd13 00:13:19.010 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:19.271 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:19.271 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:19.271 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:19.271 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:19.271 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:19.271 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:19.271 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:19.271 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:19.271 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:19.271 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:19.271 1+0 records in 00:13:19.272 1+0 records out 00:13:19.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000873601 s, 4.7 MB/s 00:13:19.272 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.272 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:19.272 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:19.272 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:19.272 11:44:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:19.272 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:19.272 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:19.272 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:19.272 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.272 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:19.272 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:19.272 { 00:13:19.272 "nbd_device": "/dev/nbd0", 00:13:19.272 "bdev_name": "nvme0n1" 00:13:19.272 }, 00:13:19.272 { 00:13:19.272 "nbd_device": "/dev/nbd1", 00:13:19.272 "bdev_name": "nvme1n1" 00:13:19.272 }, 00:13:19.272 { 00:13:19.272 "nbd_device": "/dev/nbd10", 00:13:19.272 "bdev_name": "nvme2n1" 00:13:19.272 }, 00:13:19.272 { 00:13:19.272 "nbd_device": "/dev/nbd11", 00:13:19.272 "bdev_name": "nvme2n2" 00:13:19.272 }, 00:13:19.272 { 00:13:19.272 "nbd_device": "/dev/nbd12", 00:13:19.272 "bdev_name": "nvme2n3" 00:13:19.272 }, 00:13:19.272 { 00:13:19.272 "nbd_device": "/dev/nbd13", 00:13:19.272 "bdev_name": "nvme3n1" 00:13:19.272 } 00:13:19.272 ]' 00:13:19.272 11:44:32 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:19.272 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:19.272 { 00:13:19.272 "nbd_device": "/dev/nbd0", 00:13:19.272 "bdev_name": "nvme0n1" 00:13:19.272 }, 00:13:19.272 { 00:13:19.272 "nbd_device": "/dev/nbd1", 00:13:19.272 "bdev_name": "nvme1n1" 00:13:19.272 }, 00:13:19.272 { 00:13:19.272 "nbd_device": "/dev/nbd10", 00:13:19.272 "bdev_name": "nvme2n1" 00:13:19.272 }, 00:13:19.272 { 00:13:19.272 "nbd_device": "/dev/nbd11", 00:13:19.272 "bdev_name": "nvme2n2" 00:13:19.272 }, 00:13:19.272 { 00:13:19.272 "nbd_device": "/dev/nbd12", 00:13:19.272 "bdev_name": "nvme2n3" 00:13:19.272 }, 00:13:19.272 { 00:13:19.272 "nbd_device": "/dev/nbd13", 00:13:19.272 "bdev_name": "nvme3n1" 00:13:19.272 } 00:13:19.272 ]' 00:13:19.533 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:19.533 /dev/nbd1 00:13:19.533 /dev/nbd10 00:13:19.533 /dev/nbd11 00:13:19.533 /dev/nbd12 00:13:19.533 /dev/nbd13' 00:13:19.533 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:19.533 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:19.533 /dev/nbd1 00:13:19.533 /dev/nbd10 00:13:19.533 /dev/nbd11 00:13:19.533 /dev/nbd12 00:13:19.533 /dev/nbd13' 00:13:19.533 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:19.534 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:19.534 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:19.534 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:19.534 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:19.534 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:19.534 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:19.534 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:19.534 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:19.534 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:19.534 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:19.534 256+0 records in 00:13:19.534 256+0 records out 00:13:19.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00891464 s, 118 MB/s 00:13:19.534 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:19.534 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:19.795 256+0 records in 00:13:19.795 256+0 records out 00:13:19.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.232779 s, 4.5 MB/s 00:13:19.795 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:19.795 11:44:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:20.056 256+0 records in 00:13:20.056 256+0 records out 00:13:20.056 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.300437 s, 3.5 MB/s 00:13:20.056 11:44:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:20.056 11:44:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:20.056 256+0 records in 00:13:20.056 256+0 records out 00:13:20.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129335 s, 8.1 MB/s 00:13:20.056 11:44:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:20.056 11:44:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:20.317 256+0 records in 00:13:20.317 256+0 records out 00:13:20.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.200952 s, 5.2 MB/s 00:13:20.317 11:44:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:20.317 11:44:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:20.579 256+0 records in 00:13:20.579 256+0 records out 00:13:20.579 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.229007 s, 4.6 MB/s 00:13:20.579 11:44:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:20.579 11:44:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:20.840 256+0 records in 00:13:20.840 256+0 records out 00:13:20.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236502 s, 4.4 MB/s 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:20.840 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:20.841 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:20.841 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:20.841 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:20.841 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.841 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:20.841 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.841 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:20.841 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.841 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:21.101 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:21.101 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:21.101 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:21.101 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.101 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.101 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:21.101 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:21.101 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.102 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.102 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:21.362 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:21.362 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:21.362 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:21.362 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.362 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.362 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:21.362 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:21.362 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.362 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.362 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:21.623 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:21.623 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:21.623 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:21.623 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.623 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.623 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:21.623 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:21.623 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.623 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.623 11:44:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:21.623 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:21.623 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:21.623 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:21.623 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.623 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.623 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:21.623 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:21.623 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.623 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.623 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:21.884 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:21.884 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:21.884 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:21.884 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.884 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.884 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:21.884 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:21.884 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.884 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:21.884 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:22.145 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:22.145 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:22.145 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:22.145 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:22.145 11:44:35 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:22.145 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:22.145 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:22.145 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:22.145 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:22.145 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.145 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:22.407 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:22.668 malloc_lvol_verify 00:13:22.668 11:44:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:22.668 470880ec-441a-4b0d-b56a-c28a3bcf260b 00:13:22.929 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:22.929 f60279f4-0eb9-4066-b13f-b5908231f0e2 00:13:22.929 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:23.191 /dev/nbd0 00:13:23.191 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:23.191 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:23.191 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:23.191 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:23.191 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
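Note: the lvol round-trip driving this mkfs is four RPCs against the nbd target's socket, all taken verbatim from the trace above; a condensed replay (the rpc() wrapper name is ours, not the suite's) would be:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
# 16 MB malloc bdev with 512-byte blocks, backing a fresh lvstore "lvs"
rpc bdev_malloc_create -b malloc_lvol_verify 16 512
rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
# 4 MiB logical volume, exported as /dev/nbd0 (8192 512-byte sectors,
# matching the /sys/block/nbd0/size check above), then formatted below
rpc bdev_lvol_create lvol 4 -l lvs
rpc nbd_start_disk lvs/lvol /dev/nbd0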
00:13:23.191 mke2fs 1.47.0 (5-Feb-2023) 00:13:23.191 Discarding device blocks: 0/4096 done 00:13:23.191 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:23.191 00:13:23.191 Allocating group tables: 0/1 done 00:13:23.191 Writing inode tables: 0/1 done 00:13:23.191 Creating journal (1024 blocks): done 00:13:23.191 Writing superblocks and filesystem accounting information: 0/1 done 00:13:23.191 00:13:23.191 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:23.191 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.191 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:23.191 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.191 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:23.191 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.191 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81236 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81236 ']' 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81236 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81236 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:23.452 killing process with pid 81236 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81236' 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81236 00:13:23.452 11:44:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81236 00:13:23.715 11:44:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:23.715 00:13:23.715 real 0m10.226s 00:13:23.715 user 0m13.905s 00:13:23.715 sys 0m3.800s 00:13:23.715 ************************************ 00:13:23.715 END TEST bdev_nbd 00:13:23.715 ************************************ 00:13:23.715 11:44:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.715 
11:44:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:23.715 11:44:36 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:23.715 11:44:36 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:23.715 11:44:36 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:23.715 11:44:36 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:23.715 11:44:36 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:23.715 11:44:36 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.715 11:44:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.715 ************************************ 00:13:23.715 START TEST bdev_fio 00:13:23.715 ************************************ 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:23.715 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo 
serialize_overlap=1 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.715 11:44:36 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:23.715 ************************************ 00:13:23.715 START TEST bdev_fio_rw_verify 00:13:23.715 ************************************ 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:23.715 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:23.716 11:44:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:23.977 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.977 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.977 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.977 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.977 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.977 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.977 fio-3.35 00:13:23.977 Starting 6 threads 00:13:36.236 00:13:36.236 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=81633: Tue Nov 19 11:44:47 2024 00:13:36.236 read: IOPS=15.1k, BW=59.1MiB/s (62.0MB/s)(591MiB/10001msec) 00:13:36.236 slat (usec): min=2, max=3293, avg= 6.36, stdev=20.67 00:13:36.236 clat (usec): min=92, max=7306, avg=1226.55, 
stdev=816.10 00:13:36.236 lat (usec): min=95, max=7312, avg=1232.91, stdev=816.97 00:13:36.236 clat percentiles (usec): 00:13:36.236 | 50.000th=[ 1106], 99.000th=[ 3752], 99.900th=[ 5342], 99.990th=[ 6063], 00:13:36.236 | 99.999th=[ 7308] 00:13:36.236 write: IOPS=15.6k, BW=60.8MiB/s (63.8MB/s)(608MiB/10001msec); 0 zone resets 00:13:36.236 slat (usec): min=9, max=5989, avg=41.12, stdev=149.06 00:13:36.236 clat (usec): min=73, max=11643, avg=1577.12, stdev=1058.25 00:13:36.236 lat (usec): min=87, max=11684, avg=1618.24, stdev=1072.68 00:13:36.236 clat percentiles (usec): 00:13:36.236 | 50.000th=[ 1385], 99.000th=[ 5211], 99.900th=[ 7767], 99.990th=[ 9372], 00:13:36.236 | 99.999th=[11469] 00:13:36.236 bw ( KiB/s): min=44061, max=105565, per=100.00%, avg=63000.89, stdev=3145.64, samples=114 00:13:36.236 iops : min=11013, max=26390, avg=15749.16, stdev=786.38, samples=114 00:13:36.236 lat (usec) : 100=0.04%, 250=4.18%, 500=11.90%, 750=12.36%, 1000=10.71% 00:13:36.236 lat (msec) : 2=39.52%, 4=19.42%, 10=1.87%, 20=0.01% 00:13:36.236 cpu : usr=44.40%, sys=31.76%, ctx=5262, majf=0, minf=17059 00:13:36.236 IO depths : 1=11.2%, 2=23.5%, 4=51.3%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:36.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.236 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.236 issued rwts: total=151413,155696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.236 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:36.236 00:13:36.236 Run status group 0 (all jobs): 00:13:36.236 READ: bw=59.1MiB/s (62.0MB/s), 59.1MiB/s-59.1MiB/s (62.0MB/s-62.0MB/s), io=591MiB (620MB), run=10001-10001msec 00:13:36.236 WRITE: bw=60.8MiB/s (63.8MB/s), 60.8MiB/s-60.8MiB/s (63.8MB/s-63.8MB/s), io=608MiB (638MB), run=10001-10001msec 00:13:36.236 ----------------------------------------------------- 00:13:36.236 Suppressions used: 00:13:36.236 count bytes template 00:13:36.236 6 48 /usr/src/fio/parse.c 00:13:36.236 4174 400704 /usr/src/fio/iolog.c 00:13:36.236 1 8 libtcmalloc_minimal.so 00:13:36.236 1 904 libcrypto.so 00:13:36.236 ----------------------------------------------------- 00:13:36.236 00:13:36.236 00:13:36.236 real 0m11.070s 00:13:36.236 user 0m27.336s 00:13:36.236 sys 0m19.347s 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.236 ************************************ 00:13:36.236 END TEST bdev_fio_rw_verify 00:13:36.236 ************************************ 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 
-- # local fio_dir=/usr/src/fio 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:36.236 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:36.237 11:44:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "279dbd4e-175b-41ce-8ae9-cd6fba4b45f9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "279dbd4e-175b-41ce-8ae9-cd6fba4b45f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "f4c18ff7-513c-426b-812d-6c1ce71947ea"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f4c18ff7-513c-426b-812d-6c1ce71947ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a042de33-80c8-4306-a0a4-e4de15e26e1a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a042de33-80c8-4306-a0a4-e4de15e26e1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": 
false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c2296c32-1514-4ced-9bc4-4f9a38bd9b78"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c2296c32-1514-4ced-9bc4-4f9a38bd9b78",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "ali 11:44:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:36.237 ases": [' ' "8a7d53c3-1815-4a76-8da4-a1559e36b296"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8a7d53c3-1815-4a76-8da4-a1559e36b296",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "03f92a31-06d6-42f3-938f-dc6194ecde83"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "03f92a31-06d6-42f3-938f-dc6194ecde83",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:36.237 11:44:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:36.237 11:44:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:36.237 /home/vagrant/spdk_repo/spdk 00:13:36.237 11:44:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:36.237 11:44:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:36.237 11:44:48 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:13:36.237 00:13:36.237 real 0m11.243s 00:13:36.237 user 0m27.405s 00:13:36.237 sys 0m19.427s 00:13:36.237 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.237 11:44:48 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:36.237 ************************************ 00:13:36.237 END TEST bdev_fio 00:13:36.237 ************************************ 00:13:36.237 11:44:48 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:36.237 11:44:48 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:36.237 11:44:48 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:36.237 11:44:48 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.237 11:44:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.237 ************************************ 00:13:36.237 START TEST bdev_verify 00:13:36.237 ************************************ 00:13:36.237 11:44:48 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:36.237 [2024-11-19 11:44:48.320479] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:36.237 [2024-11-19 11:44:48.320617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81804 ] 00:13:36.237 [2024-11-19 11:44:48.457606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:36.237 [2024-11-19 11:44:48.532635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.237 [2024-11-19 11:44:48.532740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.237 Running I/O for 5 seconds... 
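For reference, the verify pass launched above is a single bdevperf run against the same bdev.json the fio suite just used; stripped of the run_test harness it is just:

# Command line mirrors the logged invocation exactly.
# -q 128: queue depth; -o 4096: 4 KiB I/Os; -w verify: write, read back,
# and compare; -t 5: run for 5 seconds; -m 0x3: reactors on cores 0 and 1
# (hence the paired per-core job lines in the table below).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3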
00:13:37.751 24499.00 IOPS, 95.70 MiB/s [2024-11-19T11:44:52.107Z] 23945.00 IOPS, 93.54 MiB/s [2024-11-19T11:44:53.051Z] 24078.00 IOPS, 94.05 MiB/s [2024-11-19T11:44:53.994Z] 24386.50 IOPS, 95.26 MiB/s [2024-11-19T11:44:53.994Z] 24322.00 IOPS, 95.01 MiB/s 00:13:40.582 Latency(us) 00:13:40.582 [2024-11-19T11:44:53.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.582 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:40.582 Verification LBA range: start 0x0 length 0xa0000 00:13:40.582 nvme0n1 : 5.04 1752.47 6.85 0.00 0.00 72897.45 6906.49 68157.44 00:13:40.582 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:40.582 Verification LBA range: start 0xa0000 length 0xa0000 00:13:40.582 nvme0n1 : 5.04 2106.06 8.23 0.00 0.00 60667.42 10132.87 74610.22 00:13:40.582 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:40.582 Verification LBA range: start 0x0 length 0xbd0bd 00:13:40.582 nvme1n1 : 5.07 2282.87 8.92 0.00 0.00 55711.96 7108.14 56865.08 00:13:40.582 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:40.582 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:40.582 nvme1n1 : 5.04 2556.69 9.99 0.00 0.00 49829.38 5091.64 64527.75 00:13:40.582 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:40.582 Verification LBA range: start 0x0 length 0x80000 00:13:40.582 nvme2n1 : 5.07 1741.33 6.80 0.00 0.00 72940.64 12502.25 71383.83 00:13:40.582 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:40.582 Verification LBA range: start 0x80000 length 0x80000 00:13:40.582 nvme2n1 : 5.03 2111.91 8.25 0.00 0.00 60293.39 12603.08 65334.35 00:13:40.582 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:40.582 Verification LBA range: start 0x0 length 0x80000 00:13:40.582 nvme2n2 : 5.08 1765.09 6.89 0.00 0.00 71799.31 7057.72 62511.26 00:13:40.582 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:40.582 Verification LBA range: start 0x80000 length 0x80000 00:13:40.582 nvme2n2 : 5.05 2129.70 8.32 0.00 0.00 59669.74 8116.38 59284.87 00:13:40.582 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:40.582 Verification LBA range: start 0x0 length 0x80000 00:13:40.582 nvme2n3 : 5.08 1763.01 6.89 0.00 0.00 71767.23 6755.25 69770.63 00:13:40.582 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:40.582 Verification LBA range: start 0x80000 length 0x80000 00:13:40.582 nvme2n3 : 5.05 2103.64 8.22 0.00 0.00 60306.20 6956.90 60494.77 00:13:40.582 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:40.582 Verification LBA range: start 0x0 length 0x20000 00:13:40.582 nvme3n1 : 5.08 1763.97 6.89 0.00 0.00 71671.86 7511.43 63317.86 00:13:40.582 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:40.582 Verification LBA range: start 0x20000 length 0x20000 00:13:40.582 nvme3n1 : 5.06 2151.48 8.40 0.00 0.00 58864.07 2155.13 64931.05 00:13:40.582 [2024-11-19T11:44:53.994Z] =================================================================================================================== 00:13:40.582 [2024-11-19T11:44:53.994Z] Total : 24228.22 94.64 0.00 0.00 62945.38 2155.13 74610.22 00:13:40.842 00:13:40.842 real 0m5.949s 00:13:40.842 user 0m9.482s 00:13:40.842 sys 0m1.468s 00:13:40.842 11:44:54 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.842 ************************************ 00:13:40.842 END TEST bdev_verify 00:13:40.842 ************************************ 00:13:40.842 11:44:54 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:41.103 11:44:54 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:41.103 11:44:54 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:41.103 11:44:54 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:41.103 11:44:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.103 ************************************ 00:13:41.103 START TEST bdev_verify_big_io 00:13:41.103 ************************************ 00:13:41.103 11:44:54 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:41.103 [2024-11-19 11:44:54.352756] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:41.103 [2024-11-19 11:44:54.352906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81894 ] 00:13:41.103 [2024-11-19 11:44:54.489978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:41.364 [2024-11-19 11:44:54.539468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.364 [2024-11-19 11:44:54.539571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.624 Running I/O for 5 seconds... 
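The IOPS and MiB/s columns reported below are self-consistent with the 64 KiB (-o 65536) block size of this pass; a quick check on the first sample:

# MiB/s = IOPS * I/O size / 2^20; first sample below is 1688 IOPS at 64 KiB:
awk 'BEGIN { printf "%.2f MiB/s\n", 1688 * 65536 / 1048576 }'   # prints 105.50 MiB/s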
00:13:47.217 1688.00 IOPS, 105.50 MiB/s [2024-11-19T11:45:01.201Z] 2446.00 IOPS, 152.88 MiB/s [2024-11-19T11:45:01.461Z] 3003.67 IOPS, 187.73 MiB/s 00:13:48.049 Latency(us) 00:13:48.049 [2024-11-19T11:45:01.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.049 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:48.049 Verification LBA range: start 0x0 length 0xa000 00:13:48.049 nvme0n1 : 5.90 86.75 5.42 0.00 0.00 1374366.33 47185.92 1690627.15 00:13:48.049 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:48.049 Verification LBA range: start 0xa000 length 0xa000 00:13:48.049 nvme0n1 : 5.72 111.87 6.99 0.00 0.00 1100869.95 258111.02 1819682.66 00:13:48.049 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:48.049 Verification LBA range: start 0x0 length 0xbd0b 00:13:48.049 nvme1n1 : 5.96 107.46 6.72 0.00 0.00 1071611.39 10637.00 1316366.18 00:13:48.049 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:48.049 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:48.049 nvme1n1 : 5.80 172.10 10.76 0.00 0.00 690281.30 21173.17 1238932.87 00:13:48.049 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:48.049 Verification LBA range: start 0x0 length 0x8000 00:13:48.049 nvme2n1 : 5.98 86.97 5.44 0.00 0.00 1258474.24 70577.23 1290555.08 00:13:48.049 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:48.049 Verification LBA range: start 0x8000 length 0x8000 00:13:48.049 nvme2n1 : 5.72 139.76 8.73 0.00 0.00 822457.90 17241.01 758201.11 00:13:48.049 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:48.049 Verification LBA range: start 0x0 length 0x8000 00:13:48.049 nvme2n2 : 6.04 115.98 7.25 0.00 0.00 902184.64 20669.05 1987454.82 00:13:48.049 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:48.049 Verification LBA range: start 0x8000 length 0x8000 00:13:48.049 nvme2n2 : 5.81 126.74 7.92 0.00 0.00 900882.46 81466.29 1103424.59 00:13:48.049 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:48.049 Verification LBA range: start 0x0 length 0x8000 00:13:48.050 nvme2n3 : 6.30 139.71 8.73 0.00 0.00 718672.36 20769.87 3639365.32 00:13:48.050 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:48.050 Verification LBA range: start 0x8000 length 0x8000 00:13:48.050 nvme2n3 : 5.82 182.83 11.43 0.00 0.00 613326.13 11292.36 796917.76 00:13:48.050 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:48.050 Verification LBA range: start 0x0 length 0x2000 00:13:48.050 nvme3n1 : 6.47 215.09 13.44 0.00 0.00 447716.68 1260.31 3148954.39 00:13:48.050 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:48.050 Verification LBA range: start 0x2000 length 0x2000 00:13:48.050 nvme3n1 : 5.83 126.29 7.89 0.00 0.00 873449.12 4335.46 2594015.70 00:13:48.050 [2024-11-19T11:45:01.462Z] =================================================================================================================== 00:13:48.050 [2024-11-19T11:45:01.462Z] Total : 1611.54 100.72 0.00 0.00 825626.07 1260.31 3639365.32 00:13:48.354 00:13:48.354 real 0m7.279s 00:13:48.354 user 0m13.389s 00:13:48.354 sys 0m0.459s 00:13:48.354 11:45:01 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.354 11:45:01 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.354 ************************************ 00:13:48.354 END TEST bdev_verify_big_io 00:13:48.354 ************************************ 00:13:48.354 11:45:01 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:48.354 11:45:01 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:48.354 11:45:01 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.354 11:45:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:48.354 ************************************ 00:13:48.354 START TEST bdev_write_zeroes 00:13:48.354 ************************************ 00:13:48.354 11:45:01 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:48.354 [2024-11-19 11:45:01.702689] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:48.354 [2024-11-19 11:45:01.702845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81998 ] 00:13:48.637 [2024-11-19 11:45:01.839776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.637 [2024-11-19 11:45:01.890354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.899 Running I/O for 1 seconds... 
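write_zeroes is only meaningful on bdevs that advertise the opcode, and every xnvme bdev dumped earlier in this log reports "write_zeroes": true. A pre-filter in the style of the suite's own unmap jq query (socket left at the rpc.py default, which is an assumption here) would be:

# List the bdevs that support the write_zeroes I/O type.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'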
00:13:49.845 83328.00 IOPS, 325.50 MiB/s 00:13:49.845 Latency(us) 00:13:49.845 [2024-11-19T11:45:03.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.845 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.845 nvme0n1 : 1.02 13465.13 52.60 0.00 0.00 9496.89 5142.06 25306.98 00:13:49.845 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.845 nvme1n1 : 1.02 15460.33 60.39 0.00 0.00 8262.97 5268.09 20971.52 00:13:49.845 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.845 nvme2n1 : 1.02 13494.86 52.71 0.00 0.00 9403.86 4335.46 21072.34 00:13:49.845 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.845 nvme2n2 : 1.02 13422.61 52.43 0.00 0.00 9445.21 4562.31 20870.70 00:13:49.845 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.845 nvme2n3 : 1.02 13406.85 52.37 0.00 0.00 9450.38 4663.14 21072.34 00:13:49.845 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:49.845 nvme3n1 : 1.03 13340.62 52.11 0.00 0.00 9489.19 4763.96 25306.98 00:13:49.845 [2024-11-19T11:45:03.257Z] =================================================================================================================== 00:13:49.845 [2024-11-19T11:45:03.257Z] Total : 82590.40 322.62 0.00 0.00 9234.15 4335.46 25306.98 00:13:50.106 00:13:50.106 real 0m1.744s 00:13:50.106 user 0m1.067s 00:13:50.106 sys 0m0.496s 00:13:50.106 11:45:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.106 ************************************ 00:13:50.106 END TEST bdev_write_zeroes 00:13:50.106 ************************************ 00:13:50.106 11:45:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:50.106 11:45:03 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:50.106 11:45:03 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:50.106 11:45:03 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:50.106 11:45:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.106 ************************************ 00:13:50.106 START TEST bdev_json_nonenclosed 00:13:50.106 ************************************ 00:13:50.106 11:45:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:50.368 [2024-11-19 11:45:03.523226] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:50.368 [2024-11-19 11:45:03.523357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82041 ] 00:13:50.368 [2024-11-19 11:45:03.660399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.368 [2024-11-19 11:45:03.709590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.368 [2024-11-19 11:45:03.709699] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:50.368 [2024-11-19 11:45:03.709716] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:50.368 [2024-11-19 11:45:03.709728] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:50.716 00:13:50.716 real 0m0.357s 00:13:50.716 user 0m0.141s 00:13:50.716 sys 0m0.111s 00:13:50.716 11:45:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.716 ************************************ 00:13:50.716 END TEST bdev_json_nonenclosed 00:13:50.716 ************************************ 00:13:50.716 11:45:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:50.716 11:45:03 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:50.716 11:45:03 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:50.716 11:45:03 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:50.716 11:45:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.716 ************************************ 00:13:50.716 START TEST bdev_json_nonarray 00:13:50.716 ************************************ 00:13:50.716 11:45:03 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:50.716 [2024-11-19 11:45:03.954827] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:50.716 [2024-11-19 11:45:03.954986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82061 ] 00:13:50.716 [2024-11-19 11:45:04.093011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.978 [2024-11-19 11:45:04.142377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.978 [2024-11-19 11:45:04.142526] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
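Both failures above are expected: nonenclosed.json and nonarray.json are negative tests, valid JSON but invalid SPDK configs. The exact payloads are not shown in the trace; shapes along these lines (assumed, not taken from the repo) would trip the same two json_config_prepare_ctx errors:

# Assumed contents -- a top-level config not enclosed in {} ...
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
# ... and a "subsystems" key that is not an array.
cat > nonarray.json <<'EOF'
{ "subsystems": { "bdev": {} } }
EOF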
00:13:50.978 [2024-11-19 11:45:04.142545] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:50.978 [2024-11-19 11:45:04.142557] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:50.978 00:13:50.978 real 0m0.369s 00:13:50.978 user 0m0.151s 00:13:50.978 sys 0m0.112s 00:13:50.978 ************************************ 00:13:50.978 END TEST bdev_json_nonarray 00:13:50.978 ************************************ 00:13:50.978 11:45:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.978 11:45:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:50.978 11:45:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:50.978 11:45:04 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:50.978 11:45:04 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:50.978 11:45:04 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:50.978 11:45:04 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:50.978 11:45:04 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:50.978 11:45:04 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:50.978 11:45:04 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:50.978 11:45:04 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:50.978 11:45:04 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:50.978 11:45:04 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:50.978 11:45:04 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:51.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:55.762 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:55.762 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:55.762 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:55.762 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:55.762 00:13:55.762 real 0m51.325s 00:13:55.762 user 1m17.438s 00:13:55.762 sys 0m31.306s 00:13:55.762 11:45:08 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.762 ************************************ 00:13:55.762 END TEST blockdev_xnvme 00:13:55.762 ************************************ 00:13:55.762 11:45:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:55.762 11:45:08 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:55.762 11:45:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:55.762 11:45:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:55.762 11:45:08 -- common/autotest_common.sh@10 -- # set +x 00:13:55.762 ************************************ 00:13:55.762 START TEST ublk 00:13:55.762 ************************************ 00:13:55.762 11:45:08 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:55.762 * Looking for test storage... 
00:13:55.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:55.762 11:45:08 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:55.762 11:45:08 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:55.762 11:45:08 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:55.762 11:45:08 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:55.762 11:45:08 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:55.762 11:45:08 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:55.762 11:45:08 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:55.762 11:45:08 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:55.762 11:45:08 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:55.762 11:45:08 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:55.762 11:45:08 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:55.762 11:45:08 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:55.762 11:45:08 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:55.762 11:45:08 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:55.762 11:45:08 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:55.762 11:45:08 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:55.762 11:45:08 ublk -- scripts/common.sh@345 -- # : 1 00:13:55.762 11:45:08 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:55.762 11:45:08 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:55.762 11:45:08 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:55.762 11:45:08 ublk -- scripts/common.sh@353 -- # local d=1 00:13:55.762 11:45:08 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:55.762 11:45:08 ublk -- scripts/common.sh@355 -- # echo 1 00:13:55.762 11:45:08 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:55.763 11:45:08 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:55.763 11:45:08 ublk -- scripts/common.sh@353 -- # local d=2 00:13:55.763 11:45:08 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:55.763 11:45:08 ublk -- scripts/common.sh@355 -- # echo 2 00:13:55.763 11:45:08 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:55.763 11:45:08 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:55.763 11:45:08 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:55.763 11:45:08 ublk -- scripts/common.sh@368 -- # return 0 00:13:55.763 11:45:08 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:55.763 11:45:08 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:55.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.763 --rc genhtml_branch_coverage=1 00:13:55.763 --rc genhtml_function_coverage=1 00:13:55.763 --rc genhtml_legend=1 00:13:55.763 --rc geninfo_all_blocks=1 00:13:55.763 --rc geninfo_unexecuted_blocks=1 00:13:55.763 00:13:55.763 ' 00:13:55.763 11:45:08 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:55.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.763 --rc genhtml_branch_coverage=1 00:13:55.763 --rc genhtml_function_coverage=1 00:13:55.763 --rc genhtml_legend=1 00:13:55.763 --rc geninfo_all_blocks=1 00:13:55.763 --rc geninfo_unexecuted_blocks=1 00:13:55.763 00:13:55.763 ' 00:13:55.763 11:45:08 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:55.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.763 --rc genhtml_branch_coverage=1 00:13:55.763 --rc 
genhtml_function_coverage=1 00:13:55.763 --rc genhtml_legend=1 00:13:55.763 --rc geninfo_all_blocks=1 00:13:55.763 --rc geninfo_unexecuted_blocks=1 00:13:55.763 00:13:55.763 ' 00:13:55.763 11:45:08 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:55.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.763 --rc genhtml_branch_coverage=1 00:13:55.763 --rc genhtml_function_coverage=1 00:13:55.763 --rc genhtml_legend=1 00:13:55.763 --rc geninfo_all_blocks=1 00:13:55.763 --rc geninfo_unexecuted_blocks=1 00:13:55.763 00:13:55.763 ' 00:13:55.763 11:45:08 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:55.763 11:45:08 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:55.763 11:45:08 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:55.763 11:45:08 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:55.763 11:45:08 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:55.763 11:45:08 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:55.763 11:45:08 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:55.763 11:45:08 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:55.763 11:45:08 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:55.763 11:45:08 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:55.763 11:45:08 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:55.763 11:45:08 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:55.763 11:45:08 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:55.763 11:45:08 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:55.763 11:45:08 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:55.763 11:45:08 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:55.763 11:45:08 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:55.763 11:45:08 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:55.763 11:45:08 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:55.763 11:45:08 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:55.763 11:45:08 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:55.763 11:45:08 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:55.763 11:45:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.763 ************************************ 00:13:55.763 START TEST test_save_ublk_config 00:13:55.763 ************************************ 00:13:55.763 11:45:08 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:55.763 11:45:08 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:55.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
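Note: test_save_ublk_config exercises a save/restore round-trip: start spdk_tgt with ublk tracing, create a ublk target and a malloc-backed ublk disk, dump the live configuration with save_config, kill the target, relaunch it from that JSON, and check that /dev/ublkb0 comes back. Roughly the same flow, driven with SPDK's stock rpc.py client instead of the harness's rpc_cmd wrapper (a sketch mirroring the sizes in the log, not the test's literal code):

    build/bin/spdk_tgt -L ublk &
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 8192 blocks of 4096 B
    scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    scripts/rpc.py save_config > saved.json                # JSON dump of live state
    kill %1 && wait
    build/bin/spdk_tgt -L ublk -c saved.json &             # replay the same state
    scripts/rpc.py ublk_get_disks                          # expect /dev/ublkb0 back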
00:13:55.763 11:45:08 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:55.763 11:45:08 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82348 00:13:55.763 11:45:08 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:55.763 11:45:08 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82348 00:13:55.763 11:45:08 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82348 ']' 00:13:55.763 11:45:08 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.763 11:45:08 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.763 11:45:08 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.763 11:45:08 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.763 11:45:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:55.763 [2024-11-19 11:45:08.906802] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:55.763 [2024-11-19 11:45:08.906942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82348 ] 00:13:55.763 [2024-11-19 11:45:09.042095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.763 [2024-11-19 11:45:09.100563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.706 11:45:09 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.706 11:45:09 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:56.706 11:45:09 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:56.706 11:45:09 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:56.706 11:45:09 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.706 11:45:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:56.706 [2024-11-19 11:45:09.770432] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:56.706 [2024-11-19 11:45:09.770789] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:56.706 malloc0 00:13:56.706 [2024-11-19 11:45:09.802548] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:56.706 [2024-11-19 11:45:09.802630] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:56.706 [2024-11-19 11:45:09.802638] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:56.706 [2024-11-19 11:45:09.802651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:56.706 [2024-11-19 11:45:09.811545] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:56.706 [2024-11-19 11:45:09.811586] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:56.706 [2024-11-19 11:45:09.818443] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:56.707 [2024-11-19 11:45:09.818567] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV 00:13:56.707 [2024-11-19 11:45:09.835442] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:56.707 0 00:13:56.707 11:45:09 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.707 11:45:09 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:56.707 11:45:09 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.707 11:45:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:56.968 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.968 11:45:10 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:56.968 "subsystems": [ 00:13:56.968 { 00:13:56.968 "subsystem": "fsdev", 00:13:56.968 "config": [ 00:13:56.968 { 00:13:56.968 "method": "fsdev_set_opts", 00:13:56.968 "params": { 00:13:56.968 "fsdev_io_pool_size": 65535, 00:13:56.968 "fsdev_io_cache_size": 256 00:13:56.968 } 00:13:56.968 } 00:13:56.968 ] 00:13:56.968 }, 00:13:56.968 { 00:13:56.968 "subsystem": "keyring", 00:13:56.968 "config": [] 00:13:56.968 }, 00:13:56.968 { 00:13:56.968 "subsystem": "iobuf", 00:13:56.968 "config": [ 00:13:56.968 { 00:13:56.968 "method": "iobuf_set_options", 00:13:56.968 "params": { 00:13:56.968 "small_pool_count": 8192, 00:13:56.968 "large_pool_count": 1024, 00:13:56.968 "small_bufsize": 8192, 00:13:56.968 "large_bufsize": 135168 00:13:56.968 } 00:13:56.968 } 00:13:56.968 ] 00:13:56.968 }, 00:13:56.968 { 00:13:56.968 "subsystem": "sock", 00:13:56.968 "config": [ 00:13:56.968 { 00:13:56.968 "method": "sock_set_default_impl", 00:13:56.968 "params": { 00:13:56.968 "impl_name": "posix" 00:13:56.968 } 00:13:56.968 }, 00:13:56.968 { 00:13:56.968 "method": "sock_impl_set_options", 00:13:56.968 "params": { 00:13:56.968 "impl_name": "ssl", 00:13:56.968 "recv_buf_size": 4096, 00:13:56.968 "send_buf_size": 4096, 00:13:56.968 "enable_recv_pipe": true, 00:13:56.968 "enable_quickack": false, 00:13:56.969 "enable_placement_id": 0, 00:13:56.969 "enable_zerocopy_send_server": true, 00:13:56.969 "enable_zerocopy_send_client": false, 00:13:56.969 "zerocopy_threshold": 0, 00:13:56.969 "tls_version": 0, 00:13:56.969 "enable_ktls": false 00:13:56.969 } 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "method": "sock_impl_set_options", 00:13:56.969 "params": { 00:13:56.969 "impl_name": "posix", 00:13:56.969 "recv_buf_size": 2097152, 00:13:56.969 "send_buf_size": 2097152, 00:13:56.969 "enable_recv_pipe": true, 00:13:56.969 "enable_quickack": false, 00:13:56.969 "enable_placement_id": 0, 00:13:56.969 "enable_zerocopy_send_server": true, 00:13:56.969 "enable_zerocopy_send_client": false, 00:13:56.969 "zerocopy_threshold": 0, 00:13:56.969 "tls_version": 0, 00:13:56.969 "enable_ktls": false 00:13:56.969 } 00:13:56.969 } 00:13:56.969 ] 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "subsystem": "vmd", 00:13:56.969 "config": [] 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "subsystem": "accel", 00:13:56.969 "config": [ 00:13:56.969 { 00:13:56.969 "method": "accel_set_options", 00:13:56.969 "params": { 00:13:56.969 "small_cache_size": 128, 00:13:56.969 "large_cache_size": 16, 00:13:56.969 "task_count": 2048, 00:13:56.969 "sequence_count": 2048, 00:13:56.969 "buf_count": 2048 00:13:56.969 } 00:13:56.969 } 00:13:56.969 ] 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "subsystem": "bdev", 00:13:56.969 "config": [ 00:13:56.969 { 00:13:56.969 "method": "bdev_set_options", 00:13:56.969 "params": { 00:13:56.969 
"bdev_io_pool_size": 65535, 00:13:56.969 "bdev_io_cache_size": 256, 00:13:56.969 "bdev_auto_examine": true, 00:13:56.969 "iobuf_small_cache_size": 128, 00:13:56.969 "iobuf_large_cache_size": 16 00:13:56.969 } 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "method": "bdev_raid_set_options", 00:13:56.969 "params": { 00:13:56.969 "process_window_size_kb": 1024, 00:13:56.969 "process_max_bandwidth_mb_sec": 0 00:13:56.969 } 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "method": "bdev_iscsi_set_options", 00:13:56.969 "params": { 00:13:56.969 "timeout_sec": 30 00:13:56.969 } 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "method": "bdev_nvme_set_options", 00:13:56.969 "params": { 00:13:56.969 "action_on_timeout": "none", 00:13:56.969 "timeout_us": 0, 00:13:56.969 "timeout_admin_us": 0, 00:13:56.969 "keep_alive_timeout_ms": 10000, 00:13:56.969 "arbitration_burst": 0, 00:13:56.969 "low_priority_weight": 0, 00:13:56.969 "medium_priority_weight": 0, 00:13:56.969 "high_priority_weight": 0, 00:13:56.969 "nvme_adminq_poll_period_us": 10000, 00:13:56.969 "nvme_ioq_poll_period_us": 0, 00:13:56.969 "io_queue_requests": 0, 00:13:56.969 "delay_cmd_submit": true, 00:13:56.969 "transport_retry_count": 4, 00:13:56.969 "bdev_retry_count": 3, 00:13:56.969 "transport_ack_timeout": 0, 00:13:56.969 "ctrlr_loss_timeout_sec": 0, 00:13:56.969 "reconnect_delay_sec": 0, 00:13:56.969 "fast_io_fail_timeout_sec": 0, 00:13:56.969 "disable_auto_failback": false, 00:13:56.969 "generate_uuids": false, 00:13:56.969 "transport_tos": 0, 00:13:56.969 "nvme_error_stat": false, 00:13:56.969 "rdma_srq_size": 0, 00:13:56.969 "io_path_stat": false, 00:13:56.969 "allow_accel_sequence": false, 00:13:56.969 "rdma_max_cq_size": 0, 00:13:56.969 "rdma_cm_event_timeout_ms": 0, 00:13:56.969 "dhchap_digests": [ 00:13:56.969 "sha256", 00:13:56.969 "sha384", 00:13:56.969 "sha512" 00:13:56.969 ], 00:13:56.969 "dhchap_dhgroups": [ 00:13:56.969 "null", 00:13:56.969 "ffdhe2048", 00:13:56.969 "ffdhe3072", 00:13:56.969 "ffdhe4096", 00:13:56.969 "ffdhe6144", 00:13:56.969 "ffdhe8192" 00:13:56.969 ] 00:13:56.969 } 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "method": "bdev_nvme_set_hotplug", 00:13:56.969 "params": { 00:13:56.969 "period_us": 100000, 00:13:56.969 "enable": false 00:13:56.969 } 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "method": "bdev_malloc_create", 00:13:56.969 "params": { 00:13:56.969 "name": "malloc0", 00:13:56.969 "num_blocks": 8192, 00:13:56.969 "block_size": 4096, 00:13:56.969 "physical_block_size": 4096, 00:13:56.969 "uuid": "aff6f07e-431c-4948-871b-ebd096e21f14", 00:13:56.969 "optimal_io_boundary": 0, 00:13:56.969 "md_size": 0, 00:13:56.969 "dif_type": 0, 00:13:56.969 "dif_is_head_of_md": false, 00:13:56.969 "dif_pi_format": 0 00:13:56.969 } 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "method": "bdev_wait_for_examine" 00:13:56.969 } 00:13:56.969 ] 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "subsystem": "scsi", 00:13:56.969 "config": null 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "subsystem": "scheduler", 00:13:56.969 "config": [ 00:13:56.969 { 00:13:56.969 "method": "framework_set_scheduler", 00:13:56.969 "params": { 00:13:56.969 "name": "static" 00:13:56.969 } 00:13:56.969 } 00:13:56.969 ] 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "subsystem": "vhost_scsi", 00:13:56.969 "config": [] 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "subsystem": "vhost_blk", 00:13:56.969 "config": [] 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "subsystem": "ublk", 00:13:56.969 "config": [ 00:13:56.969 { 00:13:56.969 "method": "ublk_create_target", 
00:13:56.969 "params": { 00:13:56.969 "cpumask": "1" 00:13:56.969 } 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "method": "ublk_start_disk", 00:13:56.969 "params": { 00:13:56.969 "bdev_name": "malloc0", 00:13:56.969 "ublk_id": 0, 00:13:56.969 "num_queues": 1, 00:13:56.969 "queue_depth": 128 00:13:56.969 } 00:13:56.969 } 00:13:56.969 ] 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "subsystem": "nbd", 00:13:56.969 "config": [] 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "subsystem": "nvmf", 00:13:56.969 "config": [ 00:13:56.969 { 00:13:56.969 "method": "nvmf_set_config", 00:13:56.969 "params": { 00:13:56.969 "discovery_filter": "match_any", 00:13:56.969 "admin_cmd_passthru": { 00:13:56.969 "identify_ctrlr": false 00:13:56.969 }, 00:13:56.969 "dhchap_digests": [ 00:13:56.969 "sha256", 00:13:56.969 "sha384", 00:13:56.969 "sha512" 00:13:56.969 ], 00:13:56.969 "dhchap_dhgroups": [ 00:13:56.969 "null", 00:13:56.969 "ffdhe2048", 00:13:56.969 "ffdhe3072", 00:13:56.969 "ffdhe4096", 00:13:56.969 "ffdhe6144", 00:13:56.969 "ffdhe8192" 00:13:56.969 ] 00:13:56.969 } 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "method": "nvmf_set_max_subsystems", 00:13:56.969 "params": { 00:13:56.969 "max_subsystems": 1024 00:13:56.969 } 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "method": "nvmf_set_crdt", 00:13:56.969 "params": { 00:13:56.969 "crdt1": 0, 00:13:56.969 "crdt2": 0, 00:13:56.969 "crdt3": 0 00:13:56.969 } 00:13:56.969 } 00:13:56.969 ] 00:13:56.969 }, 00:13:56.969 { 00:13:56.969 "subsystem": "iscsi", 00:13:56.969 "config": [ 00:13:56.969 { 00:13:56.969 "method": "iscsi_set_options", 00:13:56.969 "params": { 00:13:56.969 "node_base": "iqn.2016-06.io.spdk", 00:13:56.969 "max_sessions": 128, 00:13:56.969 "max_connections_per_session": 2, 00:13:56.969 "max_queue_depth": 64, 00:13:56.969 "default_time2wait": 2, 00:13:56.969 "default_time2retain": 20, 00:13:56.969 "first_burst_length": 8192, 00:13:56.969 "immediate_data": true, 00:13:56.969 "allow_duplicated_isid": false, 00:13:56.969 "error_recovery_level": 0, 00:13:56.969 "nop_timeout": 60, 00:13:56.969 "nop_in_interval": 30, 00:13:56.969 "disable_chap": false, 00:13:56.969 "require_chap": false, 00:13:56.969 "mutual_chap": false, 00:13:56.969 "chap_group": 0, 00:13:56.969 "max_large_datain_per_connection": 64, 00:13:56.969 "max_r2t_per_connection": 4, 00:13:56.969 "pdu_pool_size": 36864, 00:13:56.969 "immediate_data_pool_size": 16384, 00:13:56.969 "data_out_pool_size": 2048 00:13:56.969 } 00:13:56.969 } 00:13:56.969 ] 00:13:56.969 } 00:13:56.969 ] 00:13:56.969 }' 00:13:56.969 11:45:10 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82348 00:13:56.969 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82348 ']' 00:13:56.969 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82348 00:13:56.969 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:56.969 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:56.969 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82348 00:13:56.969 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:56.969 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:56.969 killing process with pid 82348 00:13:56.969 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82348' 00:13:56.970 
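Note: the dump above is the point of the test: save_config serializes every subsystem's live state as replayable method/params pairs, and its ublk section records exactly what was set up earlier (a target with cpumask "1" and disk 0 on malloc0 with one queue of depth 128). One hypothetical way to pull just that section out of such a dump for inspection:

    scripts/rpc.py save_config \
      | jq '.subsystems[] | select(.subsystem == "ublk") | .config'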
11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82348 00:13:56.970 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82348 00:13:57.231 [2024-11-19 11:45:10.435678] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:57.231 [2024-11-19 11:45:10.466541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:57.231 [2024-11-19 11:45:10.466689] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:57.231 [2024-11-19 11:45:10.473443] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:57.231 [2024-11-19 11:45:10.473508] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:57.231 [2024-11-19 11:45:10.473516] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:57.231 [2024-11-19 11:45:10.473553] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:57.231 [2024-11-19 11:45:10.473703] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:57.803 11:45:10 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82386 00:13:57.803 11:45:10 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82386 00:13:57.803 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82386 ']' 00:13:57.803 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.803 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:57.803 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
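Note: the relaunch below hands the saved JSON to the new target as -c /dev/fd/63: the harness echoes the config through bash process substitution instead of writing a file to disk, which is why a file-descriptor path appears on the command line. Roughly what ublk.sh does here, with $config holding the JSON saved above:

    build/bin/spdk_tgt -L ublk -c <(echo "$config")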
00:13:57.803 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:57.803 11:45:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:57.803 11:45:10 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:57.803 11:45:10 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:57.803 "subsystems": [ 00:13:57.803 { 00:13:57.803 "subsystem": "fsdev", 00:13:57.803 "config": [ 00:13:57.803 { 00:13:57.803 "method": "fsdev_set_opts", 00:13:57.803 "params": { 00:13:57.803 "fsdev_io_pool_size": 65535, 00:13:57.803 "fsdev_io_cache_size": 256 00:13:57.803 } 00:13:57.803 } 00:13:57.803 ] 00:13:57.803 }, 00:13:57.803 { 00:13:57.803 "subsystem": "keyring", 00:13:57.803 "config": [] 00:13:57.803 }, 00:13:57.803 { 00:13:57.803 "subsystem": "iobuf", 00:13:57.803 "config": [ 00:13:57.803 { 00:13:57.803 "method": "iobuf_set_options", 00:13:57.803 "params": { 00:13:57.803 "small_pool_count": 8192, 00:13:57.803 "large_pool_count": 1024, 00:13:57.803 "small_bufsize": 8192, 00:13:57.803 "large_bufsize": 135168 00:13:57.803 } 00:13:57.803 } 00:13:57.803 ] 00:13:57.803 }, 00:13:57.803 { 00:13:57.803 "subsystem": "sock", 00:13:57.803 "config": [ 00:13:57.803 { 00:13:57.803 "method": "sock_set_default_impl", 00:13:57.803 "params": { 00:13:57.803 "impl_name": "posix" 00:13:57.803 } 00:13:57.803 }, 00:13:57.803 { 00:13:57.803 "method": "sock_impl_set_options", 00:13:57.803 "params": { 00:13:57.803 "impl_name": "ssl", 00:13:57.803 "recv_buf_size": 4096, 00:13:57.803 "send_buf_size": 4096, 00:13:57.803 "enable_recv_pipe": true, 00:13:57.803 "enable_quickack": false, 00:13:57.803 "enable_placement_id": 0, 00:13:57.804 "enable_zerocopy_send_server": true, 00:13:57.804 "enable_zerocopy_send_client": false, 00:13:57.804 "zerocopy_threshold": 0, 00:13:57.804 "tls_version": 0, 00:13:57.804 "enable_ktls": false 00:13:57.804 } 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "method": "sock_impl_set_options", 00:13:57.804 "params": { 00:13:57.804 "impl_name": "posix", 00:13:57.804 "recv_buf_size": 2097152, 00:13:57.804 "send_buf_size": 2097152, 00:13:57.804 "enable_recv_pipe": true, 00:13:57.804 "enable_quickack": false, 00:13:57.804 "enable_placement_id": 0, 00:13:57.804 "enable_zerocopy_send_server": true, 00:13:57.804 "enable_zerocopy_send_client": false, 00:13:57.804 "zerocopy_threshold": 0, 00:13:57.804 "tls_version": 0, 00:13:57.804 "enable_ktls": false 00:13:57.804 } 00:13:57.804 } 00:13:57.804 ] 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "subsystem": "vmd", 00:13:57.804 "config": [] 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "subsystem": "accel", 00:13:57.804 "config": [ 00:13:57.804 { 00:13:57.804 "method": "accel_set_options", 00:13:57.804 "params": { 00:13:57.804 "small_cache_size": 128, 00:13:57.804 "large_cache_size": 16, 00:13:57.804 "task_count": 2048, 00:13:57.804 "sequence_count": 2048, 00:13:57.804 "buf_count": 2048 00:13:57.804 } 00:13:57.804 } 00:13:57.804 ] 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "subsystem": "bdev", 00:13:57.804 "config": [ 00:13:57.804 { 00:13:57.804 "method": "bdev_set_options", 00:13:57.804 "params": { 00:13:57.804 "bdev_io_pool_size": 65535, 00:13:57.804 "bdev_io_cache_size": 256, 00:13:57.804 "bdev_auto_examine": true, 00:13:57.804 "iobuf_small_cache_size": 128, 00:13:57.804 "iobuf_large_cache_size": 16 00:13:57.804 } 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "method": "bdev_raid_set_options", 00:13:57.804 "params": { 00:13:57.804 
"process_window_size_kb": 1024, 00:13:57.804 "process_max_bandwidth_mb_sec": 0 00:13:57.804 } 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "method": "bdev_iscsi_set_options", 00:13:57.804 "params": { 00:13:57.804 "timeout_sec": 30 00:13:57.804 } 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "method": "bdev_nvme_set_options", 00:13:57.804 "params": { 00:13:57.804 "action_on_timeout": "none", 00:13:57.804 "timeout_us": 0, 00:13:57.804 "timeout_admin_us": 0, 00:13:57.804 "keep_alive_timeout_ms": 10000, 00:13:57.804 "arbitration_burst": 0, 00:13:57.804 "low_priority_weight": 0, 00:13:57.804 "medium_priority_weight": 0, 00:13:57.804 "high_priority_weight": 0, 00:13:57.804 "nvme_adminq_poll_period_us": 10000, 00:13:57.804 "nvme_ioq_poll_period_us": 0, 00:13:57.804 "io_queue_requests": 0, 00:13:57.804 "delay_cmd_submit": true, 00:13:57.804 "transport_retry_count": 4, 00:13:57.804 "bdev_retry_count": 3, 00:13:57.804 "transport_ack_timeout": 0, 00:13:57.804 "ctrlr_loss_timeout_sec": 0, 00:13:57.804 "reconnect_delay_sec": 0, 00:13:57.804 "fast_io_fail_timeout_sec": 0, 00:13:57.804 "disable_auto_failback": false, 00:13:57.804 "generate_uuids": false, 00:13:57.804 "transport_tos": 0, 00:13:57.804 "nvme_error_stat": false, 00:13:57.804 "rdma_srq_size": 0, 00:13:57.804 "io_path_stat": false, 00:13:57.804 "allow_accel_sequence": false, 00:13:57.804 "rdma_max_cq_size": 0, 00:13:57.804 "rdma_cm_event_timeout_ms": 0, 00:13:57.804 "dhchap_digests": [ 00:13:57.804 "sha256", 00:13:57.804 "sha384", 00:13:57.804 "sha512" 00:13:57.804 ], 00:13:57.804 "dhchap_dhgroups": [ 00:13:57.804 "null", 00:13:57.804 "ffdhe2048", 00:13:57.804 "ffdhe3072", 00:13:57.804 "ffdhe4096", 00:13:57.804 "ffdhe6144", 00:13:57.804 "ffdhe8192" 00:13:57.804 ] 00:13:57.804 } 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "method": "bdev_nvme_set_hotplug", 00:13:57.804 "params": { 00:13:57.804 "period_us": 100000, 00:13:57.804 "enable": false 00:13:57.804 } 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "method": "bdev_malloc_create", 00:13:57.804 "params": { 00:13:57.804 "name": "malloc0", 00:13:57.804 "num_blocks": 8192, 00:13:57.804 "block_size": 4096, 00:13:57.804 "physical_block_size": 4096, 00:13:57.804 "uuid": "aff6f07e-431c-4948-871b-ebd096e21f14", 00:13:57.804 "optimal_io_boundary": 0, 00:13:57.804 "md_size": 0, 00:13:57.804 "dif_type": 0, 00:13:57.804 "dif_is_head_of_md": false, 00:13:57.804 "dif_pi_format": 0 00:13:57.804 } 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "method": "bdev_wait_for_examine" 00:13:57.804 } 00:13:57.804 ] 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "subsystem": "scsi", 00:13:57.804 "config": null 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "subsystem": "scheduler", 00:13:57.804 "config": [ 00:13:57.804 { 00:13:57.804 "method": "framework_set_scheduler", 00:13:57.804 "params": { 00:13:57.804 "name": "static" 00:13:57.804 } 00:13:57.804 } 00:13:57.804 ] 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "subsystem": "vhost_scsi", 00:13:57.804 "config": [] 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "subsystem": "vhost_blk", 00:13:57.804 "config": [] 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "subsystem": "ublk", 00:13:57.804 "config": [ 00:13:57.804 { 00:13:57.804 "method": "ublk_create_target", 00:13:57.804 "params": { 00:13:57.804 "cpumask": "1" 00:13:57.804 } 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "method": "ublk_start_disk", 00:13:57.804 "params": { 00:13:57.804 "bdev_name": "malloc0", 00:13:57.804 "ublk_id": 0, 00:13:57.804 "num_queues": 1, 00:13:57.804 "queue_depth": 128 00:13:57.804 } 00:13:57.804 } 
00:13:57.804 ] 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "subsystem": "nbd", 00:13:57.804 "config": [] 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "subsystem": "nvmf", 00:13:57.804 "config": [ 00:13:57.804 { 00:13:57.804 "method": "nvmf_set_config", 00:13:57.804 "params": { 00:13:57.804 "discovery_filter": "match_any", 00:13:57.804 "admin_cmd_passthru": { 00:13:57.804 "identify_ctrlr": false 00:13:57.804 }, 00:13:57.804 "dhchap_digests": [ 00:13:57.804 "sha256", 00:13:57.804 "sha384", 00:13:57.804 "sha512" 00:13:57.804 ], 00:13:57.804 "dhchap_dhgroups": [ 00:13:57.804 "null", 00:13:57.804 "ffdhe2048", 00:13:57.804 "ffdhe3072", 00:13:57.804 "ffdhe4096", 00:13:57.804 "ffdhe6144", 00:13:57.804 "ffdhe8192" 00:13:57.804 ] 00:13:57.804 } 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "method": "nvmf_set_max_subsystems", 00:13:57.804 "params": { 00:13:57.804 "max_subsystems": 1024 00:13:57.804 } 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "method": "nvmf_set_crdt", 00:13:57.804 "params": { 00:13:57.804 "crdt1": 0, 00:13:57.804 "crdt2": 0, 00:13:57.804 "crdt3": 0 00:13:57.804 } 00:13:57.804 } 00:13:57.804 ] 00:13:57.804 }, 00:13:57.804 { 00:13:57.804 "subsystem": "iscsi", 00:13:57.804 "config": [ 00:13:57.804 { 00:13:57.804 "method": "iscsi_set_options", 00:13:57.804 "params": { 00:13:57.804 "node_base": "iqn.2016-06.io.spdk", 00:13:57.804 "max_sessions": 128, 00:13:57.804 "max_connections_per_session": 2, 00:13:57.804 "max_queue_depth": 64, 00:13:57.804 "default_time2wait": 2, 00:13:57.804 "default_time2retain": 20, 00:13:57.804 "first_burst_length": 8192, 00:13:57.804 "immediate_data": true, 00:13:57.804 "allow_duplicated_isid": false, 00:13:57.804 "error_recovery_level": 0, 00:13:57.804 "nop_timeout": 60, 00:13:57.804 "nop_in_interval": 30, 00:13:57.804 "disable_chap": false, 00:13:57.804 "require_chap": false, 00:13:57.804 "mutual_chap": false, 00:13:57.804 "chap_group": 0, 00:13:57.804 "max_large_datain_per_connection": 64, 00:13:57.804 "max_r2t_per_connection": 4, 00:13:57.804 "pdu_pool_size": 36864, 00:13:57.804 "immediate_data_pool_size": 16384, 00:13:57.804 "data_out_pool_size": 2048 00:13:57.804 } 00:13:57.804 } 00:13:57.804 ] 00:13:57.804 } 00:13:57.804 ] 00:13:57.804 }' 00:13:57.804 [2024-11-19 11:45:11.043825] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:57.804 [2024-11-19 11:45:11.043980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82386 ] 00:13:57.804 [2024-11-19 11:45:11.183374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.065 [2024-11-19 11:45:11.242609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.326 [2024-11-19 11:45:11.622431] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:58.326 [2024-11-19 11:45:11.622821] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:58.326 [2024-11-19 11:45:11.630580] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:58.326 [2024-11-19 11:45:11.630662] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:58.326 [2024-11-19 11:45:11.630671] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:58.326 [2024-11-19 11:45:11.630678] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:58.326 [2024-11-19 11:45:11.639538] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:58.326 [2024-11-19 11:45:11.639567] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:58.326 [2024-11-19 11:45:11.646450] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:58.326 [2024-11-19 11:45:11.646566] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:58.326 [2024-11-19 11:45:11.663435] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82386 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82386 ']' 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82386 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82386 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:58.588 killing process with pid 82386 00:13:58.588 
11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82386' 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82386 00:13:58.588 11:45:11 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82386 00:13:58.849 [2024-11-19 11:45:12.253908] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:59.110 [2024-11-19 11:45:12.289457] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:59.110 [2024-11-19 11:45:12.289615] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:59.110 [2024-11-19 11:45:12.300428] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:59.110 [2024-11-19 11:45:12.300498] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:59.110 [2024-11-19 11:45:12.300507] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:59.110 [2024-11-19 11:45:12.300552] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:59.110 [2024-11-19 11:45:12.300705] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:59.683 11:45:12 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:59.683 00:13:59.683 real 0m3.981s 00:13:59.683 user 0m2.689s 00:13:59.683 sys 0m1.972s 00:13:59.683 11:45:12 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.683 ************************************ 00:13:59.683 END TEST test_save_ublk_config 00:13:59.683 ************************************ 00:13:59.683 11:45:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:59.683 11:45:12 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82442 00:13:59.683 11:45:12 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:59.683 11:45:12 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82442 00:13:59.683 11:45:12 ublk -- common/autotest_common.sh@831 -- # '[' -z 82442 ']' 00:13:59.683 11:45:12 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.683 11:45:12 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.683 11:45:12 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.683 11:45:12 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.683 11:45:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:59.683 11:45:12 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:59.683 [2024-11-19 11:45:12.935332] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:59.683 [2024-11-19 11:45:12.935514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82442 ] 00:13:59.683 [2024-11-19 11:45:13.073053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:59.944 [2024-11-19 11:45:13.124200] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.944 [2024-11-19 11:45:13.124303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.518 11:45:13 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.518 11:45:13 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:00.518 11:45:13 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:00.518 11:45:13 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:00.518 11:45:13 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.518 11:45:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.518 ************************************ 00:14:00.518 START TEST test_create_ublk 00:14:00.518 ************************************ 00:14:00.518 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:00.518 11:45:13 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:00.518 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.518 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.518 [2024-11-19 11:45:13.813437] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:00.518 [2024-11-19 11:45:13.815111] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:00.518 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.518 11:45:13 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:00.518 11:45:13 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:00.518 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.518 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.518 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.518 11:45:13 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:00.518 11:45:13 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:00.518 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.518 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.518 [2024-11-19 11:45:13.917598] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:00.518 [2024-11-19 11:45:13.918054] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:00.518 [2024-11-19 11:45:13.918072] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:00.518 [2024-11-19 11:45:13.918093] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:00.518 [2024-11-19 11:45:13.926733] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:00.518 [2024-11-19 11:45:13.926772] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:00.780 
[2024-11-19 11:45:13.933456] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:00.780 [2024-11-19 11:45:13.934163] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:00.780 [2024-11-19 11:45:13.961467] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:00.780 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.780 11:45:13 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:00.780 11:45:13 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:00.780 11:45:13 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:00.780 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.780 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:00.780 11:45:13 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.780 11:45:13 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:00.780 { 00:14:00.780 "ublk_device": "/dev/ublkb0", 00:14:00.780 "id": 0, 00:14:00.780 "queue_depth": 512, 00:14:00.780 "num_queues": 4, 00:14:00.780 "bdev_name": "Malloc0" 00:14:00.780 } 00:14:00.780 ]' 00:14:00.780 11:45:13 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:00.780 11:45:14 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:00.780 11:45:14 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:00.780 11:45:14 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:00.780 11:45:14 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:00.780 11:45:14 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:00.780 11:45:14 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:00.780 11:45:14 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:00.780 11:45:14 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:00.780 11:45:14 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:00.780 11:45:14 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:00.780 11:45:14 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:00.780 11:45:14 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:00.780 11:45:14 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:00.780 11:45:14 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:00.780 11:45:14 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:00.780 11:45:14 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:00.780 11:45:14 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:00.780 11:45:14 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:00.780 11:45:14 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:00.780 11:45:14 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
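Note: run_fio_test has now assembled a single fio invocation that writes the 0xcc pattern across the full 128 MiB device with verification enabled; because --time_based keeps the write phase running for the whole 10 s budget, fio's warning below that the verification read phase never starts is expected. The composed command, reflowed for readability (flags identical to the template above):

    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 \
        --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc \
        --verify_state_save=0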
00:14:00.780 11:45:14 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:01.042 fio: verification read phase will never start because write phase uses all of runtime 00:14:01.042 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:01.042 fio-3.35 00:14:01.042 Starting 1 process 00:14:11.020 00:14:11.020 fio_test: (groupid=0, jobs=1): err= 0: pid=82487: Tue Nov 19 11:45:24 2024 00:14:11.020 write: IOPS=13.8k, BW=54.1MiB/s (56.7MB/s)(541MiB/10001msec); 0 zone resets 00:14:11.020 clat (usec): min=34, max=5624, avg=71.37, stdev=143.59 00:14:11.020 lat (usec): min=35, max=5628, avg=71.86, stdev=143.67 00:14:11.020 clat percentiles (usec): 00:14:11.020 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 59], 00:14:11.020 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:14:11.020 | 70.00th=[ 65], 80.00th=[ 68], 90.00th=[ 72], 95.00th=[ 78], 00:14:11.020 | 99.00th=[ 153], 99.50th=[ 262], 99.90th=[ 3359], 99.95th=[ 3785], 00:14:11.020 | 99.99th=[ 4228] 00:14:11.020 bw ( KiB/s): min= 9632, max=60984, per=99.60%, avg=55158.32, stdev=14613.53, samples=19 00:14:11.020 iops : min= 2408, max=15246, avg=13789.58, stdev=3653.38, samples=19 00:14:11.020 lat (usec) : 50=0.02%, 100=98.17%, 250=1.21%, 500=0.36%, 750=0.01% 00:14:11.020 lat (usec) : 1000=0.01% 00:14:11.020 lat (msec) : 2=0.05%, 4=0.14%, 10=0.03% 00:14:11.021 cpu : usr=2.46%, sys=10.93%, ctx=138464, majf=0, minf=796 00:14:11.021 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:11.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:11.021 issued rwts: total=0,138463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:11.021 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:11.021 00:14:11.021 Run status group 0 (all jobs): 00:14:11.021 WRITE: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=541MiB (567MB), run=10001-10001msec 00:14:11.021 00:14:11.021 Disk stats (read/write): 00:14:11.021 ublkb0: ios=0/136880, merge=0/0, ticks=0/8537, in_queue=8537, util=99.08% 00:14:11.021 11:45:24 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:11.021 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.021 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.021 [2024-11-19 11:45:24.400611] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:11.280 [2024-11-19 11:45:24.445460] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:11.280 [2024-11-19 11:45:24.446130] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:11.280 [2024-11-19 11:45:24.453436] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:11.280 [2024-11-19 11:45:24.453695] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:11.280 [2024-11-19 11:45:24.453705] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.280 11:45:24 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.280 [2024-11-19 11:45:24.469489] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:11.280 request: 00:14:11.280 { 00:14:11.280 "ublk_id": 0, 00:14:11.280 "method": "ublk_stop_disk", 00:14:11.280 "req_id": 1 00:14:11.280 } 00:14:11.280 Got JSON-RPC error response 00:14:11.280 response: 00:14:11.280 { 00:14:11.280 "code": -19, 00:14:11.280 "message": "No such device" 00:14:11.280 } 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.280 11:45:24 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.280 [2024-11-19 11:45:24.485494] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:11.280 [2024-11-19 11:45:24.487312] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:11.280 [2024-11-19 11:45:24.487343] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.280 11:45:24 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.280 11:45:24 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:11.280 11:45:24 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.280 11:45:24 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:11.280 11:45:24 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:11.280 11:45:24 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:11.280 11:45:24 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.280 11:45:24 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:11.280 11:45:24 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:11.280 11:45:24 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:11.280 00:14:11.280 real 0m10.838s 00:14:11.280 user 0m0.554s 00:14:11.280 sys 0m1.177s 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.280 ************************************ 00:14:11.280 END TEST test_create_ublk 00:14:11.280 ************************************ 00:14:11.280 11:45:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.280 11:45:24 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:11.280 11:45:24 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:11.280 11:45:24 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.280 11:45:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.280 ************************************ 00:14:11.280 START TEST test_create_multi_ublk 00:14:11.280 ************************************ 00:14:11.280 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:11.280 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:11.280 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.280 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.539 [2024-11-19 11:45:24.696425] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:11.539 [2024-11-19 11:45:24.697279] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.539 [2024-11-19 11:45:24.776547] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:11.539 [2024-11-19 11:45:24.776842] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:11.539 [2024-11-19 11:45:24.776854] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:11.539 [2024-11-19 11:45:24.776859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:11.539 [2024-11-19 11:45:24.788483] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:11.539 [2024-11-19 11:45:24.788501] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:11.539 [2024-11-19 11:45:24.800436] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:11.539 [2024-11-19 11:45:24.800937] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:11.539 [2024-11-19 11:45:24.813754] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:11.539 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.540 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:11.540 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.540 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.540 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.540 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:11.540 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:11.540 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.540 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.540 [2024-11-19 11:45:24.895532] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:11.540 [2024-11-19 11:45:24.895829] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:11.540 [2024-11-19 11:45:24.895840] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:11.540 [2024-11-19 11:45:24.895847] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:11.540 [2024-11-19 11:45:24.907440] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:11.540 [2024-11-19 11:45:24.907458] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:11.540 [2024-11-19 11:45:24.919433] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:11.540 [2024-11-19 11:45:24.919914] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:11.540 [2024-11-19 11:45:24.944443] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:11.799 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.799 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:11.799 11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.799 
11:45:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:11.799 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.799 11:45:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.799 [2024-11-19 11:45:25.027526] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:11.799 [2024-11-19 11:45:25.027825] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:11.799 [2024-11-19 11:45:25.027838] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:11.799 [2024-11-19 11:45:25.027843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:11.799 [2024-11-19 11:45:25.039461] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:11.799 [2024-11-19 11:45:25.039477] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:11.799 [2024-11-19 11:45:25.051434] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:11.799 [2024-11-19 11:45:25.051909] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:11.799 [2024-11-19 11:45:25.076453] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.799 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:11.799 [2024-11-19 11:45:25.159528] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:11.799 [2024-11-19 11:45:25.159836] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:11.799 [2024-11-19 11:45:25.159847] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:11.799 [2024-11-19 11:45:25.159854] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:11.799 
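The bring-up pattern repeating above (and finishing just below for the fourth device) reduces to a short RPC sequence. A condensed sketch, with commands and sizes taken from the log; the $rpc shorthand and the loop form are illustrative:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc ublk_create_target                            # once per spdk_tgt instance
    for i in 0 1 2 3; do
        $rpc bdev_malloc_create -b Malloc$i 128 4096   # 128 MiB malloc bdev, 4 KiB blocks
        $rpc ublk_start_disk Malloc$i $i -q 4 -d 512   # exposes /dev/ublkb$i
    done

Each ublk_start_disk performs the kernel handshake logged around it: UBLK_CMD_ADD_DEV, then UBLK_CMD_SET_PARAMS, then UBLK_CMD_START_DEV.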
[2024-11-19 11:45:25.171443] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:11.799 [2024-11-19 11:45:25.171463] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:11.799 [2024-11-19 11:45:25.183437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:11.799 [2024-11-19 11:45:25.183950] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:12.058 [2024-11-19 11:45:25.211433] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:12.058 { 00:14:12.058 "ublk_device": "/dev/ublkb0", 00:14:12.058 "id": 0, 00:14:12.058 "queue_depth": 512, 00:14:12.058 "num_queues": 4, 00:14:12.058 "bdev_name": "Malloc0" 00:14:12.058 }, 00:14:12.058 { 00:14:12.058 "ublk_device": "/dev/ublkb1", 00:14:12.058 "id": 1, 00:14:12.058 "queue_depth": 512, 00:14:12.058 "num_queues": 4, 00:14:12.058 "bdev_name": "Malloc1" 00:14:12.058 }, 00:14:12.058 { 00:14:12.058 "ublk_device": "/dev/ublkb2", 00:14:12.058 "id": 2, 00:14:12.058 "queue_depth": 512, 00:14:12.058 "num_queues": 4, 00:14:12.058 "bdev_name": "Malloc2" 00:14:12.058 }, 00:14:12.058 { 00:14:12.058 "ublk_device": "/dev/ublkb3", 00:14:12.058 "id": 3, 00:14:12.058 "queue_depth": 512, 00:14:12.058 "num_queues": 4, 00:14:12.058 "bdev_name": "Malloc3" 00:14:12.058 } 00:14:12.058 ]' 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:12.058 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:12.059 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:12.059 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:12.059 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:12.059 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:12.317 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:12.317 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:12.317 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:12.317 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:12.317 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:12.317 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:12.317 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:12.317 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:12.317 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:12.317 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:12.317 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:12.318 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:12.318 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:12.318 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:12.318 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:12.318 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:12.318 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.576 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.576 [2024-11-19 11:45:25.923510] ublk.c: 
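The jq assertions above amount to validating every field of each ublk_get_disks entry. Roughly, per device (rpc_cmd and the jq paths mirror the log; the loop and here-string form are illustrative):

    disks=$($rpc ublk_get_disks)
    for i in 0 1 2 3; do
        [[ $(jq -r ".[$i].ublk_device" <<< "$disks") == "/dev/ublkb$i" ]]
        [[ $(jq -r ".[$i].id"          <<< "$disks") == "$i" ]]
        [[ $(jq -r ".[$i].queue_depth" <<< "$disks") == 512 ]]
        [[ $(jq -r ".[$i].num_queues"  <<< "$disks") == 4 ]]
        [[ $(jq -r ".[$i].bdev_name"   <<< "$disks") == "Malloc$i" ]]
    done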
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:12.576 [2024-11-19 11:45:25.966423] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:12.576 [2024-11-19 11:45:25.967319] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:12.576 [2024-11-19 11:45:25.971426] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:12.576 [2024-11-19 11:45:25.971702] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:12.577 [2024-11-19 11:45:25.971714] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:12.577 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.577 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:12.577 11:45:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:12.577 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.577 11:45:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.577 [2024-11-19 11:45:25.979478] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:12.836 [2024-11-19 11:45:26.011473] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:12.836 [2024-11-19 11:45:26.012273] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:12.836 [2024-11-19 11:45:26.019434] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:12.836 [2024-11-19 11:45:26.019699] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:12.836 [2024-11-19 11:45:26.019711] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:12.836 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.836 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:12.836 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:12.836 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.836 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:12.836 [2024-11-19 11:45:26.035503] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:12.836 [2024-11-19 11:45:26.064833] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:12.836 [2024-11-19 11:45:26.065949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:12.836 [2024-11-19 11:45:26.075430] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:12.836 [2024-11-19 11:45:26.075672] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:12.836 [2024-11-19 11:45:26.075682] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:12.836 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.836 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:12.836 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:12.836 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.836 11:45:26 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:12.836 [2024-11-19 11:45:26.091505] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:12.836 [2024-11-19 11:45:26.123458] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:12.836 [2024-11-19 11:45:26.124126] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:12.836 [2024-11-19 11:45:26.131444] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:12.836 [2024-11-19 11:45:26.131685] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:12.836 [2024-11-19 11:45:26.131695] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:12.836 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.836 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:13.095 [2024-11-19 11:45:26.323480] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:13.095 [2024-11-19 11:45:26.324844] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:13.095 [2024-11-19 11:45:26.324874] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.095 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:13.353 11:45:26 
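Teardown mirrors bring-up in reverse. The stop/destroy/delete commands below are condensed from the log, and the trailing leftover checks summarize the check_leftover_devices helper whose output follows:

    for i in 0 1 2 3; do
        $rpc ublk_stop_disk $i          # UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV
    done
    $rpc -t 120 ublk_destroy_target     # longer RPC timeout while queues drain
    for i in 0 1 2 3; do
        $rpc bdev_malloc_delete Malloc$i
    done
    [[ $($rpc bdev_get_bdevs         | jq length) == 0 ]]   # no leftover bdevs
    [[ $($rpc bdev_lvol_get_lvstores | jq length) == 0 ]]   # no leftover lvstores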
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:13.353 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.354 11:45:26 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:13.354 11:45:26 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:13.354 11:45:26 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:13.354 00:14:13.354 real 0m1.980s 00:14:13.354 user 0m0.824s 00:14:13.354 sys 0m0.161s 00:14:13.354 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:13.354 11:45:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:13.354 ************************************ 00:14:13.354 END TEST test_create_multi_ublk 00:14:13.354 ************************************ 00:14:13.354 11:45:26 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:13.354 11:45:26 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:13.354 11:45:26 ublk -- ublk/ublk.sh@130 -- # killprocess 82442 00:14:13.354 11:45:26 ublk -- common/autotest_common.sh@950 -- # '[' -z 82442 ']' 00:14:13.354 11:45:26 ublk -- common/autotest_common.sh@954 -- # kill -0 82442 00:14:13.354 11:45:26 ublk -- common/autotest_common.sh@955 -- # uname 00:14:13.354 11:45:26 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:13.354 11:45:26 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82442 00:14:13.354 11:45:26 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:13.354 11:45:26 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:13.354 11:45:26 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82442' 00:14:13.354 killing process with pid 82442 00:14:13.354 11:45:26 ublk -- common/autotest_common.sh@969 -- # kill 82442 00:14:13.354 11:45:26 ublk -- common/autotest_common.sh@974 -- # wait 82442 00:14:13.613 [2024-11-19 11:45:26.870244] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:13.613 [2024-11-19 11:45:26.870305] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:13.875 00:14:13.875 real 0m18.529s 00:14:13.875 user 0m28.459s 00:14:13.875 sys 0m7.574s 00:14:13.875 11:45:27 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:13.875 ************************************ 00:14:13.875 END TEST ublk 00:14:13.875 ************************************ 00:14:13.875 11:45:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:13.875 11:45:27 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:13.875 
11:45:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:13.875 11:45:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:13.875 11:45:27 -- common/autotest_common.sh@10 -- # set +x 00:14:13.875 ************************************ 00:14:13.875 START TEST ublk_recovery 00:14:13.875 ************************************ 00:14:13.875 11:45:27 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:13.875 * Looking for test storage... 00:14:13.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:13.875 11:45:27 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:13.875 11:45:27 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:13.875 11:45:27 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:14.137 11:45:27 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.137 11:45:27 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:14.137 11:45:27 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.137 11:45:27 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:14.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.137 --rc genhtml_branch_coverage=1 00:14:14.137 --rc genhtml_function_coverage=1 00:14:14.137 --rc genhtml_legend=1 00:14:14.137 --rc geninfo_all_blocks=1 00:14:14.137 --rc geninfo_unexecuted_blocks=1 00:14:14.137 00:14:14.137 ' 00:14:14.137 11:45:27 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:14.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.137 --rc genhtml_branch_coverage=1 00:14:14.137 --rc genhtml_function_coverage=1 00:14:14.137 --rc genhtml_legend=1 00:14:14.137 --rc geninfo_all_blocks=1 00:14:14.137 --rc geninfo_unexecuted_blocks=1 00:14:14.137 00:14:14.137 ' 00:14:14.137 11:45:27 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:14.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.137 --rc genhtml_branch_coverage=1 00:14:14.137 --rc genhtml_function_coverage=1 00:14:14.137 --rc genhtml_legend=1 00:14:14.137 --rc geninfo_all_blocks=1 00:14:14.137 --rc geninfo_unexecuted_blocks=1 00:14:14.137 00:14:14.137 ' 00:14:14.137 11:45:27 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:14.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.137 --rc genhtml_branch_coverage=1 00:14:14.137 --rc genhtml_function_coverage=1 00:14:14.137 --rc genhtml_legend=1 00:14:14.137 --rc geninfo_all_blocks=1 00:14:14.137 --rc geninfo_unexecuted_blocks=1 00:14:14.137 00:14:14.137 ' 00:14:14.137 11:45:27 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:14.137 11:45:27 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:14.137 11:45:27 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:14.137 11:45:27 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:14.137 11:45:27 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:14.137 11:45:27 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:14.137 11:45:27 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:14.137 11:45:27 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:14.137 11:45:27 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:14.137 11:45:27 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:14.137 11:45:27 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=82804 00:14:14.137 11:45:27 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.137 11:45:27 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:14.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.137 11:45:27 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 82804 00:14:14.137 11:45:27 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 82804 ']' 00:14:14.137 11:45:27 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.137 11:45:27 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.137 11:45:27 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.137 11:45:27 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.137 11:45:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.137 [2024-11-19 11:45:27.436295] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:14.137 [2024-11-19 11:45:27.436403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82804 ] 00:14:14.399 [2024-11-19 11:45:27.566624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:14.399 [2024-11-19 11:45:27.598345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.399 [2024-11-19 11:45:27.598426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.972 11:45:28 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:14.972 11:45:28 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:14.972 11:45:28 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:14.972 11:45:28 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.972 11:45:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.972 [2024-11-19 11:45:28.279422] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:14.972 [2024-11-19 11:45:28.280358] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:14.972 11:45:28 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.972 11:45:28 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:14.972 11:45:28 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.972 11:45:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.972 malloc0 00:14:14.972 11:45:28 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.972 11:45:28 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:14.972 11:45:28 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.972 11:45:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:14.972 [2024-11-19 11:45:28.311517] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:14:14.972 [2024-11-19 11:45:28.311604] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:14.972 [2024-11-19 11:45:28.311611] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:14.972 [2024-11-19 11:45:28.311619] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:14.972 [2024-11-19 11:45:28.320492] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:14.972 [2024-11-19 11:45:28.320514] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:14.972 [2024-11-19 11:45:28.327428] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:14.972 [2024-11-19 11:45:28.327533] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:14.972 [2024-11-19 11:45:28.337430] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:14.972 1 00:14:14.972 11:45:28 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.972 11:45:28 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:16.379 11:45:29 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=82837 00:14:16.379 11:45:29 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:16.379 11:45:29 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:16.379 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:16.379 fio-3.35 00:14:16.379 Starting 1 process 00:14:21.671 11:45:34 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 82804 00:14:21.671 11:45:34 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:26.975 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 82804 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:26.975 11:45:39 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=82948 00:14:26.975 11:45:39 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:26.975 11:45:39 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 82948 00:14:26.975 11:45:39 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:26.975 11:45:39 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 82948 ']' 00:14:26.975 11:45:39 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.975 11:45:39 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.975 11:45:39 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.975 11:45:39 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.975 11:45:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.975 [2024-11-19 11:45:39.444795] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
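The recovery scenario driven here: one ublk disk backed by malloc0 (2 queues, depth 128), fio run against /dev/ublkb1, the first target killed with SIGKILL mid-run, then a fresh target brought up and the disk re-attached. A condensed sketch; the binaries, RPCs, and fio arguments are from the log, while the backgrounding and PID handling are simplified and waitforlisten-style readiness checks are omitted:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $spdk_tgt -m 0x3 -L ublk & tgt_pid=$!
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096
    $rpc ublk_start_disk malloc0 1 -q 2 -d 128
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 & fio_pid=$!
    sleep 5
    kill -9 $tgt_pid                        # simulate a target crash under I/O load
    sleep 5
    $spdk_tgt -m 0x3 -L ublk & tgt_pid=$!   # restart ...
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096
    $rpc ublk_recover_disk malloc0 1        # ... and re-attach /dev/ublkb1
    wait $fio_pid                           # fio rides out the crash and finishes

The repeated UBLK_CMD_GET_DEV_INFO / "device state 1" lines below are the recovery path polling the surviving kernel device until UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY complete.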
00:14:26.975 [2024-11-19 11:45:39.445183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82948 ] 00:14:26.975 [2024-11-19 11:45:39.580196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:26.975 [2024-11-19 11:45:39.655464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.975 [2024-11-19 11:45:39.655503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.975 11:45:40 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.975 11:45:40 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:26.975 11:45:40 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:26.975 11:45:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.975 11:45:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.975 [2024-11-19 11:45:40.324442] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:26.975 [2024-11-19 11:45:40.326736] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:26.975 11:45:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.975 11:45:40 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:26.975 11:45:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.975 11:45:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:26.975 malloc0 00:14:26.975 11:45:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.975 11:45:40 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:26.975 11:45:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.975 11:45:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:27.236 [2024-11-19 11:45:40.388624] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:27.236 [2024-11-19 11:45:40.388685] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:27.236 [2024-11-19 11:45:40.388696] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:27.236 [2024-11-19 11:45:40.396513] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:27.236 [2024-11-19 11:45:40.396541] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:27.236 1 00:14:27.236 11:45:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.236 11:45:40 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 82837 00:14:28.182 [2024-11-19 11:45:41.396599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:28.182 [2024-11-19 11:45:41.404432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:28.182 [2024-11-19 11:45:41.404454] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:29.135 [2024-11-19 11:45:42.404479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:29.135 [2024-11-19 11:45:42.408441] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:29.136 [2024-11-19 11:45:42.408453] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:30.070 [2024-11-19 11:45:43.408482] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:30.071 [2024-11-19 11:45:43.416437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:30.071 [2024-11-19 11:45:43.416455] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:30.071 [2024-11-19 11:45:43.416462] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:30.071 [2024-11-19 11:45:43.416527] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:51.995 [2024-11-19 11:46:04.606437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:51.996 [2024-11-19 11:46:04.613064] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:51.996 [2024-11-19 11:46:04.620646] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:51.996 [2024-11-19 11:46:04.620735] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:18.548 00:15:18.548 fio_test: (groupid=0, jobs=1): err= 0: pid=82840: Tue Nov 19 11:46:29 2024 00:15:18.548 read: IOPS=14.1k, BW=55.2MiB/s (57.9MB/s)(3313MiB/60002msec) 00:15:18.548 slat (nsec): min=1003, max=155896, avg=5474.21, stdev=1528.00 00:15:18.548 clat (usec): min=710, max=30278k, avg=4696.37, stdev=275046.84 00:15:18.548 lat (usec): min=715, max=30278k, avg=4701.84, stdev=275046.84 00:15:18.548 clat percentiles (usec): 00:15:18.548 | 1.00th=[ 1745], 5.00th=[ 1860], 10.00th=[ 1909], 20.00th=[ 1991], 00:15:18.548 | 30.00th=[ 2040], 40.00th=[ 2073], 50.00th=[ 2089], 60.00th=[ 2114], 00:15:18.548 | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2212], 95.00th=[ 3163], 00:15:18.548 | 99.00th=[ 5145], 99.50th=[ 5538], 99.90th=[ 7701], 99.95th=[ 9110], 00:15:18.548 | 99.99th=[13173] 00:15:18.548 bw ( KiB/s): min=54008, max=124664, per=100.00%, avg=113225.36, stdev=12558.64, samples=59 00:15:18.548 iops : min=13502, max=31166, avg=28306.34, stdev=3139.66, samples=59 00:15:18.548 write: IOPS=14.1k, BW=55.1MiB/s (57.8MB/s)(3308MiB/60002msec); 0 zone resets 00:15:18.548 slat (nsec): min=1216, max=230665, avg=5685.95, stdev=1640.26 00:15:18.548 clat (usec): min=603, max=30279k, avg=4353.39, stdev=250526.84 00:15:18.548 lat (usec): min=608, max=30279k, avg=4359.08, stdev=250526.84 00:15:18.548 clat percentiles (usec): 00:15:18.548 | 1.00th=[ 1762], 5.00th=[ 1942], 10.00th=[ 1991], 20.00th=[ 2073], 00:15:18.548 | 30.00th=[ 2147], 40.00th=[ 2180], 50.00th=[ 2180], 60.00th=[ 2212], 00:15:18.548 | 70.00th=[ 2245], 80.00th=[ 2245], 90.00th=[ 2311], 95.00th=[ 3097], 00:15:18.548 | 99.00th=[ 5145], 99.50th=[ 5604], 99.90th=[ 7701], 99.95th=[11863], 00:15:18.548 | 99.99th=[13304] 00:15:18.548 bw ( KiB/s): min=53792, max=125312, per=100.00%, avg=113076.47, stdev=12683.93, samples=59 00:15:18.548 iops : min=13448, max=31328, avg=28269.12, stdev=3170.98, samples=59 00:15:18.548 lat (usec) : 750=0.01%, 1000=0.01% 00:15:18.548 lat (msec) : 2=16.32%, 4=80.80%, 10=2.83%, 20=0.04%, >=2000=0.01% 00:15:18.548 cpu : usr=3.22%, sys=16.17%, ctx=53961, majf=0, minf=13 00:15:18.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:18.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:15:18.548 issued rwts: total=848036,846955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:18.548 00:15:18.548 Run status group 0 (all jobs): 00:15:18.548 READ: bw=55.2MiB/s (57.9MB/s), 55.2MiB/s-55.2MiB/s (57.9MB/s-57.9MB/s), io=3313MiB (3474MB), run=60002-60002msec 00:15:18.548 WRITE: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=3308MiB (3469MB), run=60002-60002msec 00:15:18.548 00:15:18.548 Disk stats (read/write): 00:15:18.548 ublkb1: ios=845007/843910, merge=0/0, ticks=3918535/3551061, in_queue=7469596, util=99.90% 00:15:18.548 11:46:29 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.548 [2024-11-19 11:46:29.595318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:18.548 [2024-11-19 11:46:29.630546] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:18.548 [2024-11-19 11:46:29.633568] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:18.548 [2024-11-19 11:46:29.640444] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:18.548 [2024-11-19 11:46:29.640562] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:18.548 [2024-11-19 11:46:29.640588] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.548 11:46:29 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.548 [2024-11-19 11:46:29.648525] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:18.548 [2024-11-19 11:46:29.649820] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:18.548 [2024-11-19 11:46:29.649857] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.548 11:46:29 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:18.548 11:46:29 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:18.548 11:46:29 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 82948 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 82948 ']' 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 82948 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82948 00:15:18.548 killing process with pid 82948 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82948' 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@969 -- # kill 82948 00:15:18.548 11:46:29 ublk_recovery -- common/autotest_common.sh@974 -- # 
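As a sanity check on the fio summary above: 848036 reads over 60.002 s is 848036 / 60.002 ≈ 14,134 IOPS, and at the 4 KiB block size that is 14,134 × 4096 ≈ 57.9 MB/s (55.2 MiB/s), matching the reported READ bandwidth; the write side (846955 completions) works out the same way. The mean completion latencies (≈4.7 ms read, ≈4.4 ms write) sit well above the ≈2.1 ms medians because a slice of I/O stalled across the crash window: the clat max of roughly 30.3 s lines up with the gap between the kill -9 at 11:45:34 and "Ublk 1 recover done successfully" at 11:46:04.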
wait 82948 00:15:18.548 [2024-11-19 11:46:29.907501] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:18.548 [2024-11-19 11:46:29.907550] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:18.548 ************************************ 00:15:18.548 END TEST ublk_recovery 00:15:18.548 ************************************ 00:15:18.548 00:15:18.548 real 1m3.111s 00:15:18.548 user 1m42.184s 00:15:18.548 sys 0m25.097s 00:15:18.548 11:46:30 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:18.548 11:46:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.548 11:46:30 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:18.548 11:46:30 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:18.548 11:46:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:18.548 11:46:30 -- common/autotest_common.sh@10 -- # set +x 00:15:18.548 11:46:30 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:18.548 11:46:30 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:18.548 11:46:30 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:18.548 11:46:30 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:18.548 11:46:30 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:18.548 11:46:30 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:18.548 11:46:30 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:18.548 11:46:30 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:18.548 11:46:30 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:18.548 11:46:30 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:18.548 11:46:30 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:18.548 11:46:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:18.548 11:46:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:18.548 11:46:30 -- common/autotest_common.sh@10 -- # set +x 00:15:18.548 ************************************ 00:15:18.548 START TEST ftl 00:15:18.548 ************************************ 00:15:18.548 11:46:30 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:18.548 * Looking for test storage... 
00:15:18.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.549 11:46:30 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:18.549 11:46:30 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:18.549 11:46:30 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.549 11:46:30 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:18.549 11:46:30 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:18.549 11:46:30 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:18.549 11:46:30 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.549 11:46:30 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:18.549 11:46:30 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:18.549 11:46:30 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.549 11:46:30 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.549 11:46:30 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:18.549 11:46:30 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:18.549 11:46:30 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.549 11:46:30 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:18.549 11:46:30 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:18.549 11:46:30 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:18.549 11:46:30 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.549 11:46:30 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:18.549 11:46:30 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:18.549 11:46:30 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:18.549 11:46:30 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.549 11:46:30 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:18.549 11:46:30 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.549 11:46:30 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:18.549 11:46:30 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:18.549 11:46:30 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:18.549 11:46:30 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.549 11:46:30 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:18.549 11:46:30 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.549 11:46:30 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:18.549 11:46:30 ftl -- ftl/ftl.sh@34 -- #
PCI_ALLOWED= 00:15:18.549 11:46:30 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:18.549 11:46:30 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:18.549 11:46:30 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:18.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:18.549 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.549 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.549 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.549 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:18.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.549 11:46:31 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=83747 00:15:18.549 11:46:31 ftl -- ftl/ftl.sh@38 -- # waitforlisten 83747 00:15:18.549 11:46:31 ftl -- common/autotest_common.sh@831 -- # '[' -z 83747 ']' 00:15:18.549 11:46:31 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.549 11:46:31 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:18.549 11:46:31 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:18.549 11:46:31 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.549 11:46:31 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:18.549 11:46:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:18.549 [2024-11-19 11:46:31.177469] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:18.549 [2024-11-19 11:46:31.178294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83747 ] 00:15:18.549 [2024-11-19 11:46:31.312522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.549 [2024-11-19 11:46:31.385732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.811 11:46:32 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:18.811 11:46:32 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:18.811 11:46:32 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:19.069 11:46:32 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:19.327 11:46:32 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:19.327 11:46:32 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:19.890 11:46:33 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:19.890 11:46:33 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:19.890 11:46:33 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:19.890 11:46:33 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:19.890 11:46:33 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:19.890 11:46:33 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:19.890 11:46:33 ftl -- ftl/ftl.sh@50 -- # break 00:15:19.890 11:46:33 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:19.890 11:46:33 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:19.890 11:46:33 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:19.890 11:46:33 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:20.148 11:46:33 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:20.148 11:46:33 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:20.148 11:46:33 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:20.148 11:46:33 ftl -- ftl/ftl.sh@63 -- # break 00:15:20.148 11:46:33 ftl -- ftl/ftl.sh@66 -- # killprocess 83747 00:15:20.148 11:46:33 ftl -- common/autotest_common.sh@950 -- # '[' -z 83747 ']' 00:15:20.148 11:46:33 ftl -- common/autotest_common.sh@954 -- # kill -0 83747 00:15:20.148 11:46:33 ftl -- common/autotest_common.sh@955 -- # uname 00:15:20.148 11:46:33 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.148 11:46:33 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83747 00:15:20.148 killing process with pid 83747 00:15:20.148 11:46:33 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.148 11:46:33 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.148 11:46:33 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83747' 00:15:20.148 11:46:33 ftl -- common/autotest_common.sh@969 -- # kill 83747 00:15:20.148 11:46:33 ftl -- common/autotest_common.sh@974 -- # wait 83747 00:15:20.407 11:46:33 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:20.407 11:46:33 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:20.407 11:46:33 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:20.407 11:46:33 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.407 11:46:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:20.407 ************************************ 00:15:20.407 START TEST ftl_fio_basic 00:15:20.407 ************************************ 00:15:20.407 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:20.668 * Looking for test storage... 
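The two jq selections traced above decide the test topology: any non-zoned NVMe bdev with 64-byte metadata and at least 1310720 blocks becomes the FTL write-buffer (NV cache) device, and any other non-zoned bdev of the same minimum size becomes the base device. Shown standalone as a minimal sketch (here rpc.py abbreviates the full scripts/rpc.py path used in the trace):

# cache candidates: 64-byte-metadata, non-zoned, >= 1310720 blocks (5 GiB of 4 KiB blocks)
cache_disks=$(rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
# base candidates: same minimum size, excluding the PCI address already chosen as cache
base_disks=$(rpc.py bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')

In this run 0000:00:10.0 won the cache role and 0000:00:11.0 the base role, which is why fio.sh is invoked below as fio.sh 0000:00:11.0 0000:00:10.0 basic.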
00:15:20.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:20.668 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:20.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.668 --rc genhtml_branch_coverage=1 00:15:20.669 --rc genhtml_function_coverage=1 00:15:20.669 --rc genhtml_legend=1 00:15:20.669 --rc geninfo_all_blocks=1 00:15:20.669 --rc geninfo_unexecuted_blocks=1 00:15:20.669 00:15:20.669 ' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:20.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.669 --rc 
genhtml_branch_coverage=1 00:15:20.669 --rc genhtml_function_coverage=1 00:15:20.669 --rc genhtml_legend=1 00:15:20.669 --rc geninfo_all_blocks=1 00:15:20.669 --rc geninfo_unexecuted_blocks=1 00:15:20.669 00:15:20.669 ' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:20.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.669 --rc genhtml_branch_coverage=1 00:15:20.669 --rc genhtml_function_coverage=1 00:15:20.669 --rc genhtml_legend=1 00:15:20.669 --rc geninfo_all_blocks=1 00:15:20.669 --rc geninfo_unexecuted_blocks=1 00:15:20.669 00:15:20.669 ' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:20.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:20.669 --rc genhtml_branch_coverage=1 00:15:20.669 --rc genhtml_function_coverage=1 00:15:20.669 --rc genhtml_legend=1 00:15:20.669 --rc geninfo_all_blocks=1 00:15:20.669 --rc geninfo_unexecuted_blocks=1 00:15:20.669 00:15:20.669 ' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:20.669 
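The lt 1.15 2 call traced above is scripts/common.sh gating coverage flags on the installed lcov version: releases older than 2 still take the legacy --rc lcov_branch_coverage=1 spelling exported as LCOV_OPTS. A minimal sketch of that dotted-version comparison, assuming numeric fields only (not the exact scripts/common.sh implementation):

# return success if $1 sorts before $2, comparing dot/dash/colon-separated fields numerically
lt() {
  local IFS=.-: i a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields compare as 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal
}
lt 1.15 2 && echo "legacy lcov: use --rc lcov_branch_coverage=1 ..."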
11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=83868 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 83868 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 83868 ']' 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
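The suite table fio.sh declares above is a plain bash associative array keyed by the script's third positional argument, and the [ -z ... ] test at fio.sh@34 guards against an unknown suite name. Condensed into a sketch (the failure branch is assumed; the trace only shows the -z test passing):

declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
device=$1; cache_device=$2; tests=${suite[$3]}     # this run: 0000:00:11.0 0000:00:10.0 basic
[ -z "$tests" ] && exit 1                          # unknown suite name -> nothing to run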
00:15:20.669 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.669 11:46:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:20.669 [2024-11-19 11:46:34.060689] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:20.669 [2024-11-19 11:46:34.060964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83868 ] 00:15:20.931 [2024-11-19 11:46:34.197587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:20.931 [2024-11-19 11:46:34.261147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.931 [2024-11-19 11:46:34.261496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.931 [2024-11-19 11:46:34.261499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.503 11:46:34 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.503 11:46:34 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:21.503 11:46:34 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:21.503 11:46:34 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:21.764 11:46:34 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:21.764 11:46:34 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:21.764 11:46:34 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:21.764 11:46:34 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:22.026 11:46:35 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:22.026 11:46:35 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:22.026 11:46:35 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:22.026 11:46:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:22.026 11:46:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:22.026 11:46:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:22.026 11:46:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:22.026 11:46:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:22.026 11:46:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:22.026 { 00:15:22.026 "name": "nvme0n1", 00:15:22.026 "aliases": [ 00:15:22.026 "5bcc213b-94b1-4b9e-88eb-90ad8cf944a7" 00:15:22.026 ], 00:15:22.026 "product_name": "NVMe disk", 00:15:22.026 "block_size": 4096, 00:15:22.026 "num_blocks": 1310720, 00:15:22.026 "uuid": "5bcc213b-94b1-4b9e-88eb-90ad8cf944a7", 00:15:22.026 "numa_id": -1, 00:15:22.026 "assigned_rate_limits": { 00:15:22.026 "rw_ios_per_sec": 0, 00:15:22.026 "rw_mbytes_per_sec": 0, 00:15:22.026 "r_mbytes_per_sec": 0, 00:15:22.026 "w_mbytes_per_sec": 0 00:15:22.026 }, 00:15:22.026 "claimed": false, 00:15:22.026 "zoned": false, 00:15:22.026 "supported_io_types": { 00:15:22.026 "read": true, 00:15:22.026 "write": true, 00:15:22.026 "unmap": true, 00:15:22.026 "flush": true, 00:15:22.026 "reset": true, 00:15:22.026 "nvme_admin": true, 00:15:22.026 "nvme_io": true, 00:15:22.026 "nvme_io_md": 
false, 00:15:22.026 "write_zeroes": true, 00:15:22.026 "zcopy": false, 00:15:22.026 "get_zone_info": false, 00:15:22.026 "zone_management": false, 00:15:22.026 "zone_append": false, 00:15:22.026 "compare": true, 00:15:22.026 "compare_and_write": false, 00:15:22.026 "abort": true, 00:15:22.026 "seek_hole": false, 00:15:22.026 "seek_data": false, 00:15:22.026 "copy": true, 00:15:22.026 "nvme_iov_md": false 00:15:22.026 }, 00:15:22.026 "driver_specific": { 00:15:22.026 "nvme": [ 00:15:22.026 { 00:15:22.026 "pci_address": "0000:00:11.0", 00:15:22.026 "trid": { 00:15:22.026 "trtype": "PCIe", 00:15:22.026 "traddr": "0000:00:11.0" 00:15:22.026 }, 00:15:22.026 "ctrlr_data": { 00:15:22.026 "cntlid": 0, 00:15:22.026 "vendor_id": "0x1b36", 00:15:22.026 "model_number": "QEMU NVMe Ctrl", 00:15:22.026 "serial_number": "12341", 00:15:22.026 "firmware_revision": "8.0.0", 00:15:22.026 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:22.026 "oacs": { 00:15:22.026 "security": 0, 00:15:22.026 "format": 1, 00:15:22.026 "firmware": 0, 00:15:22.026 "ns_manage": 1 00:15:22.026 }, 00:15:22.026 "multi_ctrlr": false, 00:15:22.026 "ana_reporting": false 00:15:22.026 }, 00:15:22.026 "vs": { 00:15:22.026 "nvme_version": "1.4" 00:15:22.026 }, 00:15:22.026 "ns_data": { 00:15:22.026 "id": 1, 00:15:22.026 "can_share": false 00:15:22.026 } 00:15:22.026 } 00:15:22.026 ], 00:15:22.026 "mp_policy": "active_passive" 00:15:22.026 } 00:15:22.026 } 00:15:22.026 ]' 00:15:22.026 11:46:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:22.288 11:46:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:22.288 11:46:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:22.288 11:46:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:22.288 11:46:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:22.289 11:46:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:22.289 11:46:35 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:22.289 11:46:35 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:22.289 11:46:35 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:22.289 11:46:35 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:22.289 11:46:35 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:22.289 11:46:35 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:22.289 11:46:35 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:22.547 11:46:35 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=6f7311f6-86c9-4b65-9f9f-479731b8c7aa 00:15:22.547 11:46:35 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6f7311f6-86c9-4b65-9f9f-479731b8c7aa 00:15:22.806 11:46:36 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=3e90c1cb-118e-4d99-8863-d2949f41a7d8 00:15:22.806 11:46:36 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3e90c1cb-118e-4d99-8863-d2949f41a7d8 00:15:22.806 11:46:36 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:22.806 11:46:36 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:22.806 11:46:36 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=3e90c1cb-118e-4d99-8863-d2949f41a7d8 00:15:22.806 11:46:36 
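The get_bdev_size helper traced above reduces a bdev's JSON description to a size in MiB: for nvme0n1 that is 1310720 blocks of 4096 bytes, i.e. 5120 MiB, so the [[ 103424 -le 5120 ]] test is false and the 103424 MiB lvol created next is presumably thin-provisioned (-t) precisely because it exceeds the physical device. A sketch of the helper (rpc.py again abbreviates the full scripts/rpc.py path):

get_bdev_size() {
  local bdev_info bs nb
  bdev_info=$(rpc.py bdev_get_bdevs -b "$1")
  bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for nvme0n1
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 for nvme0n1
  echo $(( nb * bs / 1024 / 1024 ))             # -> 5120 (MiB)
}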
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:22.806 11:46:36 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 3e90c1cb-118e-4d99-8863-d2949f41a7d8 00:15:22.806 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=3e90c1cb-118e-4d99-8863-d2949f41a7d8 00:15:22.806 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:22.806 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:22.806 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:22.806 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e90c1cb-118e-4d99-8863-d2949f41a7d8 00:15:23.064 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:23.064 { 00:15:23.064 "name": "3e90c1cb-118e-4d99-8863-d2949f41a7d8", 00:15:23.064 "aliases": [ 00:15:23.064 "lvs/nvme0n1p0" 00:15:23.064 ], 00:15:23.064 "product_name": "Logical Volume", 00:15:23.064 "block_size": 4096, 00:15:23.064 "num_blocks": 26476544, 00:15:23.064 "uuid": "3e90c1cb-118e-4d99-8863-d2949f41a7d8", 00:15:23.064 "assigned_rate_limits": { 00:15:23.064 "rw_ios_per_sec": 0, 00:15:23.064 "rw_mbytes_per_sec": 0, 00:15:23.064 "r_mbytes_per_sec": 0, 00:15:23.064 "w_mbytes_per_sec": 0 00:15:23.064 }, 00:15:23.064 "claimed": false, 00:15:23.064 "zoned": false, 00:15:23.064 "supported_io_types": { 00:15:23.064 "read": true, 00:15:23.064 "write": true, 00:15:23.064 "unmap": true, 00:15:23.064 "flush": false, 00:15:23.064 "reset": true, 00:15:23.064 "nvme_admin": false, 00:15:23.064 "nvme_io": false, 00:15:23.064 "nvme_io_md": false, 00:15:23.064 "write_zeroes": true, 00:15:23.064 "zcopy": false, 00:15:23.064 "get_zone_info": false, 00:15:23.064 "zone_management": false, 00:15:23.064 "zone_append": false, 00:15:23.064 "compare": false, 00:15:23.064 "compare_and_write": false, 00:15:23.064 "abort": false, 00:15:23.064 "seek_hole": true, 00:15:23.064 "seek_data": true, 00:15:23.064 "copy": false, 00:15:23.064 "nvme_iov_md": false 00:15:23.064 }, 00:15:23.064 "driver_specific": { 00:15:23.064 "lvol": { 00:15:23.064 "lvol_store_uuid": "6f7311f6-86c9-4b65-9f9f-479731b8c7aa", 00:15:23.064 "base_bdev": "nvme0n1", 00:15:23.064 "thin_provision": true, 00:15:23.064 "num_allocated_clusters": 0, 00:15:23.064 "snapshot": false, 00:15:23.064 "clone": false, 00:15:23.065 "esnap_clone": false 00:15:23.065 } 00:15:23.065 } 00:15:23.065 } 00:15:23.065 ]' 00:15:23.065 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:23.065 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:23.065 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:23.065 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:23.065 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:23.065 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:23.065 11:46:36 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:23.065 11:46:36 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:23.065 11:46:36 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:23.323 11:46:36 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:23.323 11:46:36 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:15:23.323 11:46:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 3e90c1cb-118e-4d99-8863-d2949f41a7d8 00:15:23.323 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=3e90c1cb-118e-4d99-8863-d2949f41a7d8 00:15:23.323 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:23.323 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:23.323 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:23.323 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e90c1cb-118e-4d99-8863-d2949f41a7d8 00:15:23.582 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:23.582 { 00:15:23.582 "name": "3e90c1cb-118e-4d99-8863-d2949f41a7d8", 00:15:23.582 "aliases": [ 00:15:23.582 "lvs/nvme0n1p0" 00:15:23.582 ], 00:15:23.582 "product_name": "Logical Volume", 00:15:23.582 "block_size": 4096, 00:15:23.582 "num_blocks": 26476544, 00:15:23.582 "uuid": "3e90c1cb-118e-4d99-8863-d2949f41a7d8", 00:15:23.582 "assigned_rate_limits": { 00:15:23.582 "rw_ios_per_sec": 0, 00:15:23.582 "rw_mbytes_per_sec": 0, 00:15:23.582 "r_mbytes_per_sec": 0, 00:15:23.582 "w_mbytes_per_sec": 0 00:15:23.582 }, 00:15:23.582 "claimed": false, 00:15:23.582 "zoned": false, 00:15:23.582 "supported_io_types": { 00:15:23.582 "read": true, 00:15:23.582 "write": true, 00:15:23.582 "unmap": true, 00:15:23.582 "flush": false, 00:15:23.582 "reset": true, 00:15:23.582 "nvme_admin": false, 00:15:23.582 "nvme_io": false, 00:15:23.582 "nvme_io_md": false, 00:15:23.582 "write_zeroes": true, 00:15:23.582 "zcopy": false, 00:15:23.582 "get_zone_info": false, 00:15:23.582 "zone_management": false, 00:15:23.582 "zone_append": false, 00:15:23.582 "compare": false, 00:15:23.582 "compare_and_write": false, 00:15:23.582 "abort": false, 00:15:23.582 "seek_hole": true, 00:15:23.582 "seek_data": true, 00:15:23.582 "copy": false, 00:15:23.582 "nvme_iov_md": false 00:15:23.582 }, 00:15:23.582 "driver_specific": { 00:15:23.582 "lvol": { 00:15:23.582 "lvol_store_uuid": "6f7311f6-86c9-4b65-9f9f-479731b8c7aa", 00:15:23.582 "base_bdev": "nvme0n1", 00:15:23.582 "thin_provision": true, 00:15:23.582 "num_allocated_clusters": 0, 00:15:23.582 "snapshot": false, 00:15:23.582 "clone": false, 00:15:23.582 "esnap_clone": false 00:15:23.582 } 00:15:23.582 } 00:15:23.582 } 00:15:23.582 ]' 00:15:23.582 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:23.582 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:23.582 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:23.582 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:23.582 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:23.582 11:46:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:23.582 11:46:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:23.582 11:46:36 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:23.840 11:46:37 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:23.840 11:46:37 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:23.840 11:46:37 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:23.840 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:23.840 11:46:37 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 3e90c1cb-118e-4d99-8863-d2949f41a7d8 00:15:23.840 11:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=3e90c1cb-118e-4d99-8863-d2949f41a7d8 00:15:23.840 11:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:23.840 11:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:23.840 11:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:23.840 11:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e90c1cb-118e-4d99-8863-d2949f41a7d8 00:15:24.100 11:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:24.100 { 00:15:24.100 "name": "3e90c1cb-118e-4d99-8863-d2949f41a7d8", 00:15:24.100 "aliases": [ 00:15:24.100 "lvs/nvme0n1p0" 00:15:24.100 ], 00:15:24.100 "product_name": "Logical Volume", 00:15:24.100 "block_size": 4096, 00:15:24.100 "num_blocks": 26476544, 00:15:24.100 "uuid": "3e90c1cb-118e-4d99-8863-d2949f41a7d8", 00:15:24.100 "assigned_rate_limits": { 00:15:24.100 "rw_ios_per_sec": 0, 00:15:24.100 "rw_mbytes_per_sec": 0, 00:15:24.100 "r_mbytes_per_sec": 0, 00:15:24.100 "w_mbytes_per_sec": 0 00:15:24.100 }, 00:15:24.100 "claimed": false, 00:15:24.100 "zoned": false, 00:15:24.100 "supported_io_types": { 00:15:24.100 "read": true, 00:15:24.100 "write": true, 00:15:24.100 "unmap": true, 00:15:24.100 "flush": false, 00:15:24.100 "reset": true, 00:15:24.100 "nvme_admin": false, 00:15:24.100 "nvme_io": false, 00:15:24.100 "nvme_io_md": false, 00:15:24.100 "write_zeroes": true, 00:15:24.100 "zcopy": false, 00:15:24.100 "get_zone_info": false, 00:15:24.100 "zone_management": false, 00:15:24.100 "zone_append": false, 00:15:24.100 "compare": false, 00:15:24.100 "compare_and_write": false, 00:15:24.100 "abort": false, 00:15:24.100 "seek_hole": true, 00:15:24.100 "seek_data": true, 00:15:24.100 "copy": false, 00:15:24.100 "nvme_iov_md": false 00:15:24.100 }, 00:15:24.100 "driver_specific": { 00:15:24.100 "lvol": { 00:15:24.100 "lvol_store_uuid": "6f7311f6-86c9-4b65-9f9f-479731b8c7aa", 00:15:24.100 "base_bdev": "nvme0n1", 00:15:24.100 "thin_provision": true, 00:15:24.100 "num_allocated_clusters": 0, 00:15:24.100 "snapshot": false, 00:15:24.100 "clone": false, 00:15:24.100 "esnap_clone": false 00:15:24.100 } 00:15:24.100 } 00:15:24.100 } 00:15:24.100 ]' 00:15:24.100 11:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:24.100 11:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:24.100 11:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:24.100 11:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:24.100 11:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:24.100 11:46:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:24.100 11:46:37 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:24.100 11:46:37 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:24.100 11:46:37 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3e90c1cb-118e-4d99-8863-d2949f41a7d8 -c nvc0n1p0 --l2p_dram_limit 60 00:15:24.361 [2024-11-19 11:46:37.520929] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.361 [2024-11-19 11:46:37.520974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:24.361 [2024-11-19 11:46:37.520987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:24.361 [2024-11-19 11:46:37.520996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.361 [2024-11-19 11:46:37.521057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.361 [2024-11-19 11:46:37.521068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:24.361 [2024-11-19 11:46:37.521077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:15:24.361 [2024-11-19 11:46:37.521089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.361 [2024-11-19 11:46:37.521109] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:24.361 [2024-11-19 11:46:37.521349] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:24.361 [2024-11-19 11:46:37.521370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.361 [2024-11-19 11:46:37.521380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:24.361 [2024-11-19 11:46:37.521387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:15:24.361 [2024-11-19 11:46:37.521395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.361 [2024-11-19 11:46:37.521463] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cbf0b110-9408-48ec-bc76-c40a5a75577f 00:15:24.361 [2024-11-19 11:46:37.522759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.361 [2024-11-19 11:46:37.522932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:24.361 [2024-11-19 11:46:37.522988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:15:24.361 [2024-11-19 11:46:37.523029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.361 [2024-11-19 11:46:37.529853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.361 [2024-11-19 11:46:37.529973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:24.361 [2024-11-19 11:46:37.530013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.735 ms 00:15:24.361 [2024-11-19 11:46:37.530047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.361 [2024-11-19 11:46:37.530172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.361 [2024-11-19 11:46:37.530215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:24.361 [2024-11-19 11:46:37.530262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:15:24.361 [2024-11-19 11:46:37.530294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.361 [2024-11-19 11:46:37.530376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.361 [2024-11-19 11:46:37.530434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:24.361 [2024-11-19 11:46:37.530480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:24.361 [2024-11-19 11:46:37.530515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:24.361 [2024-11-19 11:46:37.530565] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:24.361 [2024-11-19 11:46:37.532201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.361 [2024-11-19 11:46:37.532287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:24.361 [2024-11-19 11:46:37.532331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:15:24.361 [2024-11-19 11:46:37.532367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.361 [2024-11-19 11:46:37.532441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.362 [2024-11-19 11:46:37.532481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:24.362 [2024-11-19 11:46:37.532525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:15:24.362 [2024-11-19 11:46:37.532560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.362 [2024-11-19 11:46:37.532613] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:24.362 [2024-11-19 11:46:37.532777] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:24.362 [2024-11-19 11:46:37.532828] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:24.362 [2024-11-19 11:46:37.532869] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:24.362 [2024-11-19 11:46:37.532916] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:24.362 [2024-11-19 11:46:37.532963] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:24.362 [2024-11-19 11:46:37.533001] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:24.362 [2024-11-19 11:46:37.533038] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:24.362 [2024-11-19 11:46:37.533081] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:24.362 [2024-11-19 11:46:37.533116] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:24.362 [2024-11-19 11:46:37.533148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.362 [2024-11-19 11:46:37.533179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:24.362 [2024-11-19 11:46:37.533216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:15:24.362 [2024-11-19 11:46:37.533248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.362 [2024-11-19 11:46:37.533341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.362 [2024-11-19 11:46:37.533380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:24.362 [2024-11-19 11:46:37.533436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:15:24.362 [2024-11-19 11:46:37.533479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.362 [2024-11-19 11:46:37.533595] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:24.362 [2024-11-19 11:46:37.533632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:24.362 
[2024-11-19 11:46:37.533664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:24.362 [2024-11-19 11:46:37.533698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:24.362 [2024-11-19 11:46:37.533729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:24.362 [2024-11-19 11:46:37.533761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:24.362 [2024-11-19 11:46:37.533792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:24.362 [2024-11-19 11:46:37.533821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:24.362 [2024-11-19 11:46:37.533850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:24.362 [2024-11-19 11:46:37.533879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:24.362 [2024-11-19 11:46:37.533910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:24.362 [2024-11-19 11:46:37.533937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:24.362 [2024-11-19 11:46:37.533970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:24.362 [2024-11-19 11:46:37.533999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:24.362 [2024-11-19 11:46:37.534027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:24.362 [2024-11-19 11:46:37.534058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:24.362 [2024-11-19 11:46:37.534086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:24.362 [2024-11-19 11:46:37.534115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:24.362 [2024-11-19 11:46:37.534145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:24.362 [2024-11-19 11:46:37.534172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:24.362 [2024-11-19 11:46:37.534204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:24.362 [2024-11-19 11:46:37.534233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:24.362 [2024-11-19 11:46:37.534263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:24.362 [2024-11-19 11:46:37.534294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:24.362 [2024-11-19 11:46:37.534323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:24.362 [2024-11-19 11:46:37.534349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:24.362 [2024-11-19 11:46:37.534379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:24.362 [2024-11-19 11:46:37.534421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:24.362 [2024-11-19 11:46:37.534455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:24.362 [2024-11-19 11:46:37.534491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:24.362 [2024-11-19 11:46:37.534524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:24.362 [2024-11-19 11:46:37.534549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:24.362 [2024-11-19 11:46:37.534579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:24.362 [2024-11-19 11:46:37.534611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:24.362 [2024-11-19 11:46:37.534640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:24.362 [2024-11-19 11:46:37.534673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:24.362 [2024-11-19 11:46:37.534701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:24.362 [2024-11-19 11:46:37.534728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:24.362 [2024-11-19 11:46:37.534756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:24.362 [2024-11-19 11:46:37.534786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:24.362 [2024-11-19 11:46:37.534811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:24.362 [2024-11-19 11:46:37.534841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:24.362 [2024-11-19 11:46:37.534871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:24.362 [2024-11-19 11:46:37.534897] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:24.362 [2024-11-19 11:46:37.534938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:24.362 [2024-11-19 11:46:37.534970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:24.362 [2024-11-19 11:46:37.534996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:24.362 [2024-11-19 11:46:37.535024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:24.362 [2024-11-19 11:46:37.535053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:24.362 [2024-11-19 11:46:37.535082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:24.362 [2024-11-19 11:46:37.535113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:24.362 [2024-11-19 11:46:37.535142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:24.362 [2024-11-19 11:46:37.535172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:24.362 [2024-11-19 11:46:37.535207] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:24.362 [2024-11-19 11:46:37.535241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:24.362 [2024-11-19 11:46:37.535274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:24.362 [2024-11-19 11:46:37.535301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:24.362 [2024-11-19 11:46:37.535334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:24.362 [2024-11-19 11:46:37.535364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:24.362 [2024-11-19 11:46:37.535388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:24.362 [2024-11-19 11:46:37.535435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:24.362 [2024-11-19 
11:46:37.535470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:24.362 [2024-11-19 11:46:37.535499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:24.362 [2024-11-19 11:46:37.535529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:24.362 [2024-11-19 11:46:37.535558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:24.362 [2024-11-19 11:46:37.535590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:24.362 [2024-11-19 11:46:37.535620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:24.362 [2024-11-19 11:46:37.535650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:24.362 [2024-11-19 11:46:37.535678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:24.362 [2024-11-19 11:46:37.535706] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:24.362 [2024-11-19 11:46:37.535735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:24.362 [2024-11-19 11:46:37.535765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:24.362 [2024-11-19 11:46:37.535798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:24.362 [2024-11-19 11:46:37.535842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:24.362 [2024-11-19 11:46:37.535873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:24.362 [2024-11-19 11:46:37.535905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:24.362 [2024-11-19 11:46:37.535929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:24.363 [2024-11-19 11:46:37.535965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.358 ms 00:15:24.363 [2024-11-19 11:46:37.535997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:24.363 [2024-11-19 11:46:37.536095] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
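The layout dump above is internally consistent and worth a quick check: the FTL volume exposes 20971520 blocks of 4 KiB (80 GiB), one 4-byte L2P entry per block, so the l2p region is exactly the 80.00 MiB the dump reports. And since bdev_ftl_create was passed --l2p_dram_limit 60, only part of that table can stay resident in DRAM, which the "maximum resident size" notice a few entries later confirms. The arithmetic, as a one-liner:

# 4-byte L2P entries, one per 4 KiB user block:
echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80 (MiB), matching "Region l2p ... blocks: 80.00 MiB"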
00:15:24.363 [2024-11-19 11:46:37.536133] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:27.065 [2024-11-19 11:46:40.047792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.065 [2024-11-19 11:46:40.048224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:27.065 [2024-11-19 11:46:40.048299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2511.674 ms 00:15:27.065 [2024-11-19 11:46:40.048350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.065 [2024-11-19 11:46:40.070743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.065 [2024-11-19 11:46:40.071016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:27.065 [2024-11-19 11:46:40.071191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.238 ms 00:15:27.065 [2024-11-19 11:46:40.071344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.065 [2024-11-19 11:46:40.071757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.065 [2024-11-19 11:46:40.071908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:27.065 [2024-11-19 11:46:40.072036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:15:27.065 [2024-11-19 11:46:40.072161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.065 [2024-11-19 11:46:40.084951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.065 [2024-11-19 11:46:40.085056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:27.065 [2024-11-19 11:46:40.085117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.505 ms 00:15:27.065 [2024-11-19 11:46:40.085165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.065 [2024-11-19 11:46:40.085249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.065 [2024-11-19 11:46:40.085286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:27.065 [2024-11-19 11:46:40.085331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:15:27.065 [2024-11-19 11:46:40.085376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.065 [2024-11-19 11:46:40.085865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.065 [2024-11-19 11:46:40.085950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:27.065 [2024-11-19 11:46:40.085994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:15:27.065 [2024-11-19 11:46:40.086037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.065 [2024-11-19 11:46:40.086205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.065 [2024-11-19 11:46:40.086244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:27.065 [2024-11-19 11:46:40.086303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:15:27.065 [2024-11-19 11:46:40.086348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.065 [2024-11-19 11:46:40.093540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.065 [2024-11-19 11:46:40.093613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:27.065 [2024-11-19 
11:46:40.093664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.116 ms 00:15:27.065 [2024-11-19 11:46:40.093702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.065 [2024-11-19 11:46:40.102756] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:27.065 [2024-11-19 11:46:40.120239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.065 [2024-11-19 11:46:40.120332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:27.065 [2024-11-19 11:46:40.120400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.427 ms 00:15:27.066 [2024-11-19 11:46:40.120457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.066 [2024-11-19 11:46:40.166329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.066 [2024-11-19 11:46:40.166458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:27.066 [2024-11-19 11:46:40.166519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.807 ms 00:15:27.066 [2024-11-19 11:46:40.166566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.066 [2024-11-19 11:46:40.166786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.066 [2024-11-19 11:46:40.166829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:27.066 [2024-11-19 11:46:40.166878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:15:27.066 [2024-11-19 11:46:40.166923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.066 [2024-11-19 11:46:40.169946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.066 [2024-11-19 11:46:40.170039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:27.066 [2024-11-19 11:46:40.170088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.967 ms 00:15:27.066 [2024-11-19 11:46:40.170131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.066 [2024-11-19 11:46:40.172700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.066 [2024-11-19 11:46:40.172779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:27.066 [2024-11-19 11:46:40.172834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.502 ms 00:15:27.066 [2024-11-19 11:46:40.172872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.066 [2024-11-19 11:46:40.173218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.066 [2024-11-19 11:46:40.173277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:27.066 [2024-11-19 11:46:40.173321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:15:27.066 [2024-11-19 11:46:40.173364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.066 [2024-11-19 11:46:40.199319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.066 [2024-11-19 11:46:40.199441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:27.066 [2024-11-19 11:46:40.199489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.890 ms 00:15:27.066 [2024-11-19 11:46:40.199539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.066 [2024-11-19 11:46:40.203698] 
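Each FTL startup step above is reported as a name:/duration: pair of trace_step notices. When profiling runs like this one, a throwaway awk filter tabulates them (hypothetical helper, not part of the test suite; assumes the usual one-notice-per-line log form):

awk '/trace_step.*name:/     { sub(/.*name: /, "");     step = $0 }
     /trace_step.*duration:/ { sub(/.*duration: /, ""); printf "%-14s %s\n", $0, step }' ftl_startup.log

For this run the dominant costs would show up as Scrub NV cache (2511.674 ms), Clear L2P (45.807 ms), and Initialize L2P (26.427 ms).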
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.066 [2024-11-19 11:46:40.203778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:27.066 [2024-11-19 11:46:40.203823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.055 ms 00:15:27.066 [2024-11-19 11:46:40.203880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.066 [2024-11-19 11:46:40.206775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.066 [2024-11-19 11:46:40.206861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:27.066 [2024-11-19 11:46:40.206904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.822 ms 00:15:27.066 [2024-11-19 11:46:40.206949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.066 [2024-11-19 11:46:40.209956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.066 [2024-11-19 11:46:40.210051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:27.066 [2024-11-19 11:46:40.210105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.932 ms 00:15:27.066 [2024-11-19 11:46:40.210151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.066 [2024-11-19 11:46:40.210227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.066 [2024-11-19 11:46:40.210265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:27.066 [2024-11-19 11:46:40.210317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:27.066 [2024-11-19 11:46:40.210363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.066 [2024-11-19 11:46:40.210501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.066 [2024-11-19 11:46:40.210544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:27.066 [2024-11-19 11:46:40.210590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:27.066 [2024-11-19 11:46:40.210634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.066 [2024-11-19 11:46:40.211730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2690.303 ms, result 0 00:15:27.066 { 00:15:27.066 "name": "ftl0", 00:15:27.066 "uuid": "cbf0b110-9408-48ec-bc76-c40a5a75577f" 00:15:27.066 } 00:15:27.066 11:46:40 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:27.066 11:46:40 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:27.066 11:46:40 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:27.066 11:46:40 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:27.066 11:46:40 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:27.066 11:46:40 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:27.066 11:46:40 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:27.066 11:46:40 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:27.327 [ 00:15:27.327 { 00:15:27.327 "name": "ftl0", 00:15:27.327 "aliases": [ 00:15:27.327 "cbf0b110-9408-48ec-bc76-c40a5a75577f" 00:15:27.327 ], 00:15:27.327 "product_name": "FTL disk", 00:15:27.327 
"block_size": 4096, 00:15:27.327 "num_blocks": 20971520, 00:15:27.327 "uuid": "cbf0b110-9408-48ec-bc76-c40a5a75577f", 00:15:27.327 "assigned_rate_limits": { 00:15:27.327 "rw_ios_per_sec": 0, 00:15:27.327 "rw_mbytes_per_sec": 0, 00:15:27.327 "r_mbytes_per_sec": 0, 00:15:27.327 "w_mbytes_per_sec": 0 00:15:27.327 }, 00:15:27.327 "claimed": false, 00:15:27.327 "zoned": false, 00:15:27.327 "supported_io_types": { 00:15:27.327 "read": true, 00:15:27.327 "write": true, 00:15:27.327 "unmap": true, 00:15:27.327 "flush": true, 00:15:27.327 "reset": false, 00:15:27.327 "nvme_admin": false, 00:15:27.327 "nvme_io": false, 00:15:27.327 "nvme_io_md": false, 00:15:27.327 "write_zeroes": true, 00:15:27.327 "zcopy": false, 00:15:27.327 "get_zone_info": false, 00:15:27.327 "zone_management": false, 00:15:27.327 "zone_append": false, 00:15:27.327 "compare": false, 00:15:27.327 "compare_and_write": false, 00:15:27.327 "abort": false, 00:15:27.327 "seek_hole": false, 00:15:27.327 "seek_data": false, 00:15:27.327 "copy": false, 00:15:27.327 "nvme_iov_md": false 00:15:27.327 }, 00:15:27.327 "driver_specific": { 00:15:27.327 "ftl": { 00:15:27.327 "base_bdev": "3e90c1cb-118e-4d99-8863-d2949f41a7d8", 00:15:27.327 "cache": "nvc0n1p0" 00:15:27.327 } 00:15:27.327 } 00:15:27.327 } 00:15:27.327 ] 00:15:27.327 11:46:40 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:27.327 11:46:40 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:27.327 11:46:40 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:27.327 11:46:40 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:27.592 11:46:40 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:27.592 [2024-11-19 11:46:40.924143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.592 [2024-11-19 11:46:40.924293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:27.592 [2024-11-19 11:46:40.924350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:27.592 [2024-11-19 11:46:40.924402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.592 [2024-11-19 11:46:40.924508] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:27.592 [2024-11-19 11:46:40.925155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.592 [2024-11-19 11:46:40.925245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:27.592 [2024-11-19 11:46:40.925286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:15:27.592 [2024-11-19 11:46:40.925328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.592 [2024-11-19 11:46:40.925838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.592 [2024-11-19 11:46:40.925903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:27.592 [2024-11-19 11:46:40.925947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:15:27.592 [2024-11-19 11:46:40.925985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.592 [2024-11-19 11:46:40.929253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.592 [2024-11-19 11:46:40.929320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:27.592 [2024-11-19 
11:46:40.929355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.222 ms 00:15:27.592 [2024-11-19 11:46:40.929393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.592 [2024-11-19 11:46:40.935671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.592 [2024-11-19 11:46:40.935763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:27.592 [2024-11-19 11:46:40.935828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.200 ms 00:15:27.592 [2024-11-19 11:46:40.935871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.592 [2024-11-19 11:46:40.937554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.592 [2024-11-19 11:46:40.937640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:27.592 [2024-11-19 11:46:40.937687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:15:27.592 [2024-11-19 11:46:40.937728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.592 [2024-11-19 11:46:40.941911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.592 [2024-11-19 11:46:40.942003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:27.592 [2024-11-19 11:46:40.942051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.078 ms 00:15:27.592 [2024-11-19 11:46:40.942093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.592 [2024-11-19 11:46:40.942289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.592 [2024-11-19 11:46:40.942333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:27.592 [2024-11-19 11:46:40.942371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:15:27.592 [2024-11-19 11:46:40.942420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.592 [2024-11-19 11:46:40.943929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.592 [2024-11-19 11:46:40.944015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:27.592 [2024-11-19 11:46:40.944058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.458 ms 00:15:27.592 [2024-11-19 11:46:40.944102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.592 [2024-11-19 11:46:40.945206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.592 [2024-11-19 11:46:40.945283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:27.592 [2024-11-19 11:46:40.945317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:15:27.592 [2024-11-19 11:46:40.945373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.592 [2024-11-19 11:46:40.946290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.592 [2024-11-19 11:46:40.946374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:27.592 [2024-11-19 11:46:40.946439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:15:27.592 [2024-11-19 11:46:40.946487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.592 [2024-11-19 11:46:40.947365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.592 [2024-11-19 11:46:40.947459] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:27.592 [2024-11-19 11:46:40.947504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:15:27.592 [2024-11-19 11:46:40.947549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.592 [2024-11-19 11:46:40.947627] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:27.592 [2024-11-19 11:46:40.947677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.947722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.947760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.947803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.947858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.947895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.947932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.947968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 
11:46:40.948577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.948969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:27.592 [2024-11-19 11:46:40.949012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:27.593 [2024-11-19 11:46:40.949547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.949992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.950972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:27.593 [2024-11-19 11:46:40.951549] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:27.593 [2024-11-19 11:46:40.951585] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cbf0b110-9408-48ec-bc76-c40a5a75577f 00:15:27.593 [2024-11-19 11:46:40.951621] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:27.593 [2024-11-19 11:46:40.951656] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:27.593 [2024-11-19 11:46:40.951696] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:27.593 [2024-11-19 11:46:40.951733] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:27.593 [2024-11-19 11:46:40.951773] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:27.593 [2024-11-19 11:46:40.951804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:27.593 [2024-11-19 11:46:40.951848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:27.593 [2024-11-19 11:46:40.951883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:27.593 [2024-11-19 11:46:40.951918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:27.594 [2024-11-19 11:46:40.951957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.594 [2024-11-19 11:46:40.951992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:27.594 [2024-11-19 11:46:40.952031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.331 ms 00:15:27.594 [2024-11-19 11:46:40.952064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.953979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.594 [2024-11-19 11:46:40.954064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:27.594 [2024-11-19 11:46:40.954104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.863 ms 00:15:27.594 [2024-11-19 11:46:40.954145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.954284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:27.594 [2024-11-19 11:46:40.954329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:27.594 [2024-11-19 11:46:40.954372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:15:27.594 [2024-11-19 11:46:40.954425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.961009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.594 [2024-11-19 11:46:40.961112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:27.594 [2024-11-19 11:46:40.961149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.594 [2024-11-19 11:46:40.961202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 
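The statistics dump above is the FTL shutdown summary for ftl0: 960 total writes were issued against 0 user writes (this run touched only metadata), which is why the write amplification factor is printed as "WAF: inf". As an illustrative aside, the same figure can be recovered from a captured log with standard tools; the log file name below is hypothetical, and grep/awk availability is assumed:

  # Pull "total writes" and "user writes" out of a saved dump and compute WAF.
  # WAF = total (media) writes / user (host) writes; 0 user writes prints "inf".
  grep -Eo '(total|user) writes: [0-9]+' ftl0_shutdown.log \
    | awk '{w[$1] = $3} END {print (w["user"] > 0 ? w["total"] / w["user"] : "inf")}'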
[2024-11-19 11:46:40.961298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.594 [2024-11-19 11:46:40.961344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:27.594 [2024-11-19 11:46:40.961384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.594 [2024-11-19 11:46:40.961452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.961576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.594 [2024-11-19 11:46:40.961624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:27.594 [2024-11-19 11:46:40.961657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.594 [2024-11-19 11:46:40.961692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.961752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.594 [2024-11-19 11:46:40.961791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:27.594 [2024-11-19 11:46:40.961830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.594 [2024-11-19 11:46:40.961870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.974397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.594 [2024-11-19 11:46:40.974516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:27.594 [2024-11-19 11:46:40.974575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.594 [2024-11-19 11:46:40.974589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.984397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.594 [2024-11-19 11:46:40.984449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:27.594 [2024-11-19 11:46:40.984461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.594 [2024-11-19 11:46:40.984482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.984591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.594 [2024-11-19 11:46:40.984607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:27.594 [2024-11-19 11:46:40.984617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.594 [2024-11-19 11:46:40.984627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.984705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.594 [2024-11-19 11:46:40.984717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:27.594 [2024-11-19 11:46:40.984726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.594 [2024-11-19 11:46:40.984737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.984819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.594 [2024-11-19 11:46:40.984832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:27.594 [2024-11-19 11:46:40.984840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.594 [2024-11-19 11:46:40.984852] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.984898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.594 [2024-11-19 11:46:40.984911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:27.594 [2024-11-19 11:46:40.984921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.594 [2024-11-19 11:46:40.984930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.984977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.594 [2024-11-19 11:46:40.984991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:27.594 [2024-11-19 11:46:40.985000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.594 [2024-11-19 11:46:40.985012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.985066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:27.594 [2024-11-19 11:46:40.985078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:27.594 [2024-11-19 11:46:40.985087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:27.594 [2024-11-19 11:46:40.985096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:27.594 [2024-11-19 11:46:40.985271] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.093 ms, result 0 00:15:27.594 true 00:15:27.855 11:46:41 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 83868 00:15:27.855 11:46:41 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 83868 ']' 00:15:27.855 11:46:41 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 83868 00:15:27.855 11:46:41 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:27.855 11:46:41 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:27.855 11:46:41 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83868 00:15:27.855 killing process with pid 83868 00:15:27.855 11:46:41 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:27.855 11:46:41 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:27.855 11:46:41 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83868' 00:15:27.855 11:46:41 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 83868 00:15:27.855 11:46:41 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 83868 00:15:31.156 11:46:43 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:31.156 11:46:43 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:31.156 11:46:43 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:31.156 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:31.156 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:31.156 11:46:43 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:31.157 11:46:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:31.157 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:31.157 fio-3.35 00:15:31.157 Starting 1 thread 00:15:34.461 00:15:34.461 test: (groupid=0, jobs=1): err= 0: pid=84027: Tue Nov 19 11:46:47 2024 00:15:34.461 read: IOPS=1260, BW=83.7MiB/s (87.8MB/s)(255MiB/3040msec) 00:15:34.461 slat (nsec): min=4120, max=23857, avg=5476.62, stdev=1890.11 00:15:34.461 clat (usec): min=270, max=973, avg=356.05, stdev=65.08 00:15:34.461 lat (usec): min=274, max=985, avg=361.53, stdev=65.69 00:15:34.461 clat percentiles (usec): 00:15:34.461 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 322], 00:15:34.461 | 30.00th=[ 326], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:15:34.461 | 70.00th=[ 338], 80.00th=[ 388], 90.00th=[ 453], 95.00th=[ 502], 00:15:34.461 | 99.00th=[ 611], 99.50th=[ 660], 99.90th=[ 783], 99.95th=[ 807], 00:15:34.461 | 99.99th=[ 971] 00:15:34.461 write: IOPS=1269, BW=84.3MiB/s (88.4MB/s)(256MiB/3037msec); 0 zone resets 00:15:34.461 slat (nsec): min=14681, max=90219, avg=18629.08, stdev=3391.54 00:15:34.461 clat (usec): min=305, max=1768, avg=396.86, stdev=105.42 00:15:34.461 lat (usec): min=320, max=1820, avg=415.49, stdev=106.27 00:15:34.461 clat percentiles (usec): 00:15:34.461 | 1.00th=[ 338], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 351], 00:15:34.461 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 355], 60.00th=[ 363], 00:15:34.461 | 70.00th=[ 371], 80.00th=[ 420], 90.00th=[ 498], 95.00th=[ 586], 00:15:34.461 | 99.00th=[ 881], 99.50th=[ 971], 99.90th=[ 1319], 99.95th=[ 1663], 00:15:34.461 | 99.99th=[ 1762] 00:15:34.461 bw ( KiB/s): min=80920, max=91120, per=100.00%, avg=86337.33, stdev=4707.58, samples=6 00:15:34.461 iops : min= 1190, max= 1340, avg=1269.67, stdev=69.23, samples=6 00:15:34.461 lat (usec) : 500=92.60%, 750=6.45%, 1000=0.74% 
00:15:34.461 lat (msec) : 2=0.21% 00:15:34.461 cpu : usr=99.01%, sys=0.26%, ctx=5, majf=0, minf=1326 00:15:34.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:34.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.461 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:34.461 00:15:34.461 Run status group 0 (all jobs): 00:15:34.461 READ: bw=83.7MiB/s (87.8MB/s), 83.7MiB/s-83.7MiB/s (87.8MB/s-87.8MB/s), io=255MiB (267MB), run=3040-3040msec 00:15:34.461 WRITE: bw=84.3MiB/s (88.4MB/s), 84.3MiB/s-84.3MiB/s (88.4MB/s-88.4MB/s), io=256MiB (269MB), run=3037-3037msec 00:15:35.404 ----------------------------------------------------- 00:15:35.404 Suppressions used: 00:15:35.404 count bytes template 00:15:35.404 1 5 /usr/src/fio/parse.c 00:15:35.404 1 8 libtcmalloc_minimal.so 00:15:35.404 1 904 libcrypto.so 00:15:35.404 ----------------------------------------------------- 00:15:35.404 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:35.404 11:46:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:35.404 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:35.404 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:35.404 fio-3.35 00:15:35.404 Starting 2 threads 00:16:07.518 00:16:07.518 first_half: (groupid=0, jobs=1): err= 0: pid=84113: Tue Nov 19 11:47:17 2024 00:16:07.518 read: IOPS=2383, BW=9535KiB/s (9764kB/s)(255MiB/27379msec) 00:16:07.518 slat (nsec): min=2936, max=35780, avg=4886.63, stdev=1092.95 00:16:07.518 clat (usec): min=711, max=338156, avg=37182.57, stdev=22427.01 00:16:07.518 lat (usec): min=716, max=338161, avg=37187.46, stdev=22427.02 00:16:07.518 clat percentiles (msec): 00:16:07.518 | 1.00th=[ 5], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:16:07.518 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 34], 00:16:07.518 | 70.00th=[ 37], 80.00th=[ 40], 90.00th=[ 44], 95.00th=[ 52], 00:16:07.518 | 99.00th=[ 144], 99.50th=[ 209], 99.90th=[ 313], 99.95th=[ 321], 00:16:07.518 | 99.99th=[ 330] 00:16:07.518 write: IOPS=2991, BW=11.7MiB/s (12.3MB/s)(256MiB/21905msec); 0 zone resets 00:16:07.518 slat (usec): min=3, max=423, avg= 6.36, stdev= 3.39 00:16:07.518 clat (usec): min=463, max=130358, avg=16379.27, stdev=29636.14 00:16:07.518 lat (usec): min=468, max=130364, avg=16385.63, stdev=29636.17 00:16:07.518 clat percentiles (usec): 00:16:07.518 | 1.00th=[ 922], 5.00th=[ 1205], 10.00th=[ 1369], 20.00th=[ 1631], 00:16:07.518 | 30.00th=[ 1893], 40.00th=[ 2343], 50.00th=[ 4228], 60.00th=[ 6521], 00:16:07.518 | 70.00th=[ 8979], 80.00th=[ 13173], 90.00th=[ 80217], 95.00th=[ 92799], 00:16:07.518 | 99.00th=[116917], 99.50th=[120062], 99.90th=[126354], 99.95th=[127402], 00:16:07.518 | 99.99th=[129500] 00:16:07.518 bw ( KiB/s): min= 2936, max=48328, per=75.53%, avg=18078.90, stdev=12180.95, samples=29 00:16:07.518 iops : min= 734, max=12082, avg=4519.72, stdev=3045.24, samples=29 00:16:07.518 lat (usec) : 500=0.01%, 750=0.14%, 1000=0.73% 00:16:07.518 lat (msec) : 2=16.02%, 4=8.06%, 10=12.11%, 20=6.87%, 50=47.28% 00:16:07.518 lat (msec) : 100=6.05%, 250=2.59%, 500=0.13% 00:16:07.518 cpu : usr=99.23%, sys=0.08%, ctx=41, majf=0, minf=5529 00:16:07.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:07.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.518 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.518 issued rwts: total=65268,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.518 second_half: (groupid=0, jobs=1): err= 0: pid=84114: Tue Nov 19 11:47:17 2024 00:16:07.518 read: IOPS=2370, BW=9481KiB/s (9708kB/s)(255MiB/27490msec) 00:16:07.518 slat (usec): min=3, max=349, avg= 5.13, stdev= 2.82 00:16:07.518 clat (usec): min=661, max=342864, avg=36538.04, stdev=20812.37 00:16:07.518 lat (usec): min=666, max=342869, avg=36543.17, stdev=20812.39 00:16:07.518 clat percentiles (msec): 00:16:07.518 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:16:07.518 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 34], 00:16:07.518 | 70.00th=[ 36], 80.00th=[ 40], 90.00th=[ 
43], 95.00th=[ 49], 00:16:07.518 | 99.00th=[ 146], 99.50th=[ 188], 99.90th=[ 268], 99.95th=[ 292], 00:16:07.518 | 99.99th=[ 342] 00:16:07.518 write: IOPS=3376, BW=13.2MiB/s (13.8MB/s)(256MiB/19412msec); 0 zone resets 00:16:07.518 slat (usec): min=3, max=789, avg= 6.89, stdev= 7.72 00:16:07.518 clat (usec): min=365, max=130619, avg=17326.06, stdev=29749.38 00:16:07.518 lat (usec): min=371, max=130627, avg=17332.95, stdev=29749.48 00:16:07.518 clat percentiles (usec): 00:16:07.518 | 1.00th=[ 807], 5.00th=[ 1106], 10.00th=[ 1287], 20.00th=[ 1565], 00:16:07.518 | 30.00th=[ 1926], 40.00th=[ 2409], 50.00th=[ 4686], 60.00th=[ 7963], 00:16:07.518 | 70.00th=[ 11207], 80.00th=[ 14091], 90.00th=[ 80217], 95.00th=[ 93848], 00:16:07.518 | 99.00th=[115868], 99.50th=[120062], 99.90th=[126354], 99.95th=[126354], 00:16:07.518 | 99.99th=[129500] 00:16:07.518 bw ( KiB/s): min= 240, max=42056, per=81.13%, avg=19418.07, stdev=10721.09, samples=27 00:16:07.518 iops : min= 60, max=10514, avg=4854.52, stdev=2680.27, samples=27 00:16:07.518 lat (usec) : 500=0.01%, 750=0.27%, 1000=1.22% 00:16:07.518 lat (msec) : 2=14.51%, 4=8.65%, 10=8.79%, 20=10.39%, 50=47.20% 00:16:07.518 lat (msec) : 100=6.10%, 250=2.76%, 500=0.09% 00:16:07.518 cpu : usr=98.81%, sys=0.28%, ctx=127, majf=0, minf=5615 00:16:07.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:07.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.518 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.518 issued rwts: total=65156,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.518 00:16:07.518 Run status group 0 (all jobs): 00:16:07.518 READ: bw=18.5MiB/s (19.4MB/s), 9481KiB/s-9535KiB/s (9708kB/s-9764kB/s), io=509MiB (534MB), run=27379-27490msec 00:16:07.518 WRITE: bw=23.4MiB/s (24.5MB/s), 11.7MiB/s-13.2MiB/s (12.3MB/s-13.8MB/s), io=512MiB (537MB), run=19412-21905msec 00:16:07.518 ----------------------------------------------------- 00:16:07.518 Suppressions used: 00:16:07.518 count bytes template 00:16:07.518 2 10 /usr/src/fio/parse.c 00:16:07.518 3 288 /usr/src/fio/iolog.c 00:16:07.518 1 8 libtcmalloc_minimal.so 00:16:07.518 1 904 libcrypto.so 00:16:07.518 ----------------------------------------------------- 00:16:07.518 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.518 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:07.519 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:07.519 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:07.519 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:07.519 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:07.519 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:07.519 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:07.519 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:07.519 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:07.519 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:07.519 11:47:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:07.519 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:07.519 fio-3.35 00:16:07.519 Starting 1 thread 00:16:22.427 00:16:22.427 test: (groupid=0, jobs=1): err= 0: pid=84454: Tue Nov 19 11:47:34 2024 00:16:22.427 read: IOPS=7417, BW=29.0MiB/s (30.4MB/s)(255MiB/8790msec) 00:16:22.427 slat (nsec): min=3011, max=63685, avg=4802.40, stdev=1235.57 00:16:22.427 clat (usec): min=520, max=33022, avg=17247.60, stdev=2355.48 00:16:22.427 lat (usec): min=524, max=33026, avg=17252.40, stdev=2355.48 00:16:22.427 clat percentiles (usec): 00:16:22.427 | 1.00th=[14877], 5.00th=[15139], 10.00th=[15401], 20.00th=[15664], 00:16:22.427 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16319], 60.00th=[16712], 00:16:22.427 | 70.00th=[17433], 80.00th=[19006], 90.00th=[20579], 95.00th=[22152], 00:16:22.427 | 99.00th=[25035], 99.50th=[26346], 99.90th=[28443], 99.95th=[29754], 00:16:22.427 | 99.99th=[32375] 00:16:22.428 write: IOPS=10.3k, BW=40.2MiB/s (42.1MB/s)(256MiB/6370msec); 0 zone resets 00:16:22.428 slat (usec): min=4, max=441, avg= 7.96, stdev= 4.26 00:16:22.428 clat (usec): min=543, max=62064, avg=12371.49, stdev=14673.59 00:16:22.428 lat (usec): min=549, max=62069, avg=12379.45, stdev=14673.57 00:16:22.428 clat percentiles (usec): 00:16:22.428 | 1.00th=[ 865], 5.00th=[ 1139], 10.00th=[ 1303], 20.00th=[ 1598], 00:16:22.428 | 30.00th=[ 1942], 40.00th=[ 2704], 50.00th=[ 7177], 60.00th=[ 9896], 00:16:22.428 | 70.00th=[12911], 80.00th=[16581], 90.00th=[41157], 95.00th=[46400], 00:16:22.428 | 99.00th=[52691], 99.50th=[54264], 99.90th=[58983], 99.95th=[59507], 00:16:22.428 | 99.99th=[61604] 00:16:22.428 bw ( KiB/s): min=29800, max=67736, per=98.00%, avg=40329.85, stdev=9987.93, samples=13 00:16:22.428 iops : min= 7450, max=16934, avg=10082.46, stdev=2496.98, samples=13 00:16:22.428 lat (usec) : 750=0.18%, 1000=1.06% 00:16:22.428 lat (msec) : 2=14.62%, 4=4.95%, 10=9.59%, 20=54.67%, 50=13.92% 00:16:22.428 lat (msec) : 100=1.00% 00:16:22.428 cpu : 
usr=98.90%, sys=0.26%, ctx=37, majf=0, minf=5577 00:16:22.428 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:22.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.428 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.428 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.428 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.428 00:16:22.428 Run status group 0 (all jobs): 00:16:22.428 READ: bw=29.0MiB/s (30.4MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=255MiB (267MB), run=8790-8790msec 00:16:22.428 WRITE: bw=40.2MiB/s (42.1MB/s), 40.2MiB/s-40.2MiB/s (42.1MB/s-42.1MB/s), io=256MiB (268MB), run=6370-6370msec 00:16:22.428 ----------------------------------------------------- 00:16:22.428 Suppressions used: 00:16:22.428 count bytes template 00:16:22.428 1 5 /usr/src/fio/parse.c 00:16:22.428 2 192 /usr/src/fio/iolog.c 00:16:22.428 1 8 libtcmalloc_minimal.so 00:16:22.428 1 904 libcrypto.so 00:16:22.428 ----------------------------------------------------- 00:16:22.428 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:22.428 Remove shared memory files 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69401 /dev/shm/spdk_tgt_trace.pid82804 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:22.428 00:16:22.428 real 1m1.781s 00:16:22.428 user 2m19.461s 00:16:22.428 sys 0m3.118s 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.428 ************************************ 00:16:22.428 END TEST ftl_fio_basic 00:16:22.428 ************************************ 00:16:22.428 11:47:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:22.428 11:47:35 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:22.428 11:47:35 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:22.428 11:47:35 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.428 11:47:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:22.428 ************************************ 00:16:22.428 START TEST ftl_bdevperf 00:16:22.428 ************************************ 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:22.428 * Looking for test storage... 
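Before each fio invocation above, fio_bdev repeats the same sanitizer detection: ldd inspects the spdk_bdev fio plugin, grep/awk pull out the path of any linked ASan runtime, and that library is LD_PRELOAD-ed ahead of the plugin so fio resolves the sanitizer's interceptors first. A standalone sketch of that pattern, with the paths taken from this run:

  # Find the ASan runtime the SPDK fio plugin links against, if any.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # Preload the sanitizer (when present) before the plugin itself, then run fio.
  if [ -n "$asan_lib" ]; then
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
  else
    LD_PRELOAD="$plugin" /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
  fi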
00:16:22.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:22.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.428 --rc genhtml_branch_coverage=1 00:16:22.428 --rc genhtml_function_coverage=1 00:16:22.428 --rc genhtml_legend=1 00:16:22.428 --rc geninfo_all_blocks=1 00:16:22.428 --rc geninfo_unexecuted_blocks=1 00:16:22.428 00:16:22.428 ' 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:22.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.428 --rc genhtml_branch_coverage=1 00:16:22.428 
--rc genhtml_function_coverage=1 00:16:22.428 --rc genhtml_legend=1 00:16:22.428 --rc geninfo_all_blocks=1 00:16:22.428 --rc geninfo_unexecuted_blocks=1 00:16:22.428 00:16:22.428 ' 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:22.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.428 --rc genhtml_branch_coverage=1 00:16:22.428 --rc genhtml_function_coverage=1 00:16:22.428 --rc genhtml_legend=1 00:16:22.428 --rc geninfo_all_blocks=1 00:16:22.428 --rc geninfo_unexecuted_blocks=1 00:16:22.428 00:16:22.428 ' 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:22.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:22.428 --rc genhtml_branch_coverage=1 00:16:22.428 --rc genhtml_function_coverage=1 00:16:22.428 --rc genhtml_legend=1 00:16:22.428 --rc geninfo_all_blocks=1 00:16:22.428 --rc geninfo_unexecuted_blocks=1 00:16:22.428 00:16:22.428 ' 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:22.428 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84710 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84710 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 84710 ']' 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:22.429 11:47:35 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:22.689 [2024-11-19 11:47:35.897669] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
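The harness above launches bdevperf with -z (initialize, then wait for RPC-driven tests) and blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-poll pattern, assuming the stock rpc.py client and the default socket path (the real waitforlisten helper has more retries and diagnostics):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$bdevperf" -z -T ftl0 &
    pid=$!
    # Poll any cheap RPC until the target starts answering; give up if it died.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || exit 1
        sleep 0.1
    done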
00:16:22.689 [2024-11-19 11:47:35.897817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84710 ] 00:16:22.689 [2024-11-19 11:47:36.032586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.949 [2024-11-19 11:47:36.106338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.518 11:47:36 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.518 11:47:36 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:23.518 11:47:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:23.518 11:47:36 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:23.518 11:47:36 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:23.518 11:47:36 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:23.518 11:47:36 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:23.518 11:47:36 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:23.778 11:47:37 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:23.778 11:47:37 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:23.778 11:47:37 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:23.778 11:47:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:23.778 11:47:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:23.778 11:47:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:23.778 11:47:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:23.778 11:47:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:24.038 11:47:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:24.038 { 00:16:24.038 "name": "nvme0n1", 00:16:24.038 "aliases": [ 00:16:24.038 "af9ff7c6-e8dd-4c42-9bec-a5c8efbd4028" 00:16:24.038 ], 00:16:24.038 "product_name": "NVMe disk", 00:16:24.038 "block_size": 4096, 00:16:24.038 "num_blocks": 1310720, 00:16:24.038 "uuid": "af9ff7c6-e8dd-4c42-9bec-a5c8efbd4028", 00:16:24.038 "numa_id": -1, 00:16:24.038 "assigned_rate_limits": { 00:16:24.038 "rw_ios_per_sec": 0, 00:16:24.038 "rw_mbytes_per_sec": 0, 00:16:24.038 "r_mbytes_per_sec": 0, 00:16:24.038 "w_mbytes_per_sec": 0 00:16:24.038 }, 00:16:24.038 "claimed": true, 00:16:24.038 "claim_type": "read_many_write_one", 00:16:24.038 "zoned": false, 00:16:24.038 "supported_io_types": { 00:16:24.038 "read": true, 00:16:24.038 "write": true, 00:16:24.038 "unmap": true, 00:16:24.038 "flush": true, 00:16:24.038 "reset": true, 00:16:24.038 "nvme_admin": true, 00:16:24.038 "nvme_io": true, 00:16:24.038 "nvme_io_md": false, 00:16:24.038 "write_zeroes": true, 00:16:24.038 "zcopy": false, 00:16:24.038 "get_zone_info": false, 00:16:24.038 "zone_management": false, 00:16:24.038 "zone_append": false, 00:16:24.038 "compare": true, 00:16:24.038 "compare_and_write": false, 00:16:24.038 "abort": true, 00:16:24.038 "seek_hole": false, 00:16:24.038 "seek_data": false, 00:16:24.038 "copy": true, 00:16:24.038 "nvme_iov_md": false 00:16:24.038 }, 00:16:24.038 "driver_specific": { 00:16:24.038 
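create_base_bdev attaches the emulated QEMU controller at 0000:00:11.0 as nvme0, which exposes its namespace as the bdev nvme0n1; get_bdev_size then derives the size in MiB from the bdev_get_bdevs JSON (block_size times num_blocks). A condensed sketch of that computation, matching the numbers in the dump that follows:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bs=$("$rpc" bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
    nb=$("$rpc" bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))                               # 5120 MiB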
"nvme": [ 00:16:24.038 { 00:16:24.038 "pci_address": "0000:00:11.0", 00:16:24.038 "trid": { 00:16:24.038 "trtype": "PCIe", 00:16:24.038 "traddr": "0000:00:11.0" 00:16:24.038 }, 00:16:24.038 "ctrlr_data": { 00:16:24.038 "cntlid": 0, 00:16:24.038 "vendor_id": "0x1b36", 00:16:24.038 "model_number": "QEMU NVMe Ctrl", 00:16:24.038 "serial_number": "12341", 00:16:24.038 "firmware_revision": "8.0.0", 00:16:24.038 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:24.038 "oacs": { 00:16:24.038 "security": 0, 00:16:24.038 "format": 1, 00:16:24.038 "firmware": 0, 00:16:24.038 "ns_manage": 1 00:16:24.038 }, 00:16:24.038 "multi_ctrlr": false, 00:16:24.038 "ana_reporting": false 00:16:24.038 }, 00:16:24.038 "vs": { 00:16:24.038 "nvme_version": "1.4" 00:16:24.038 }, 00:16:24.038 "ns_data": { 00:16:24.038 "id": 1, 00:16:24.038 "can_share": false 00:16:24.038 } 00:16:24.038 } 00:16:24.038 ], 00:16:24.038 "mp_policy": "active_passive" 00:16:24.038 } 00:16:24.038 } 00:16:24.038 ]' 00:16:24.038 11:47:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:24.038 11:47:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:24.038 11:47:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:24.038 11:47:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:24.039 11:47:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:24.039 11:47:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:24.039 11:47:37 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:24.039 11:47:37 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:24.039 11:47:37 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:24.039 11:47:37 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:24.039 11:47:37 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:24.298 11:47:37 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=6f7311f6-86c9-4b65-9f9f-479731b8c7aa 00:16:24.298 11:47:37 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:24.298 11:47:37 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f7311f6-86c9-4b65-9f9f-479731b8c7aa 00:16:24.558 11:47:37 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:24.819 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=50e76d5a-2557-477e-a6d0-091e1885f989 00:16:24.819 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 50e76d5a-2557-477e-a6d0-091e1885f989 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=c9182cb3-3651-42ab-969e-37289a0cc4b7 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c9182cb3-3651-42ab-969e-37289a0cc4b7 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=c9182cb3-3651-42ab-969e-37289a0cc4b7 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size c9182cb3-3651-42ab-969e-37289a0cc4b7 00:16:25.079 11:47:38 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c9182cb3-3651-42ab-969e-37289a0cc4b7 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c9182cb3-3651-42ab-969e-37289a0cc4b7 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:25.079 { 00:16:25.079 "name": "c9182cb3-3651-42ab-969e-37289a0cc4b7", 00:16:25.079 "aliases": [ 00:16:25.079 "lvs/nvme0n1p0" 00:16:25.079 ], 00:16:25.079 "product_name": "Logical Volume", 00:16:25.079 "block_size": 4096, 00:16:25.079 "num_blocks": 26476544, 00:16:25.079 "uuid": "c9182cb3-3651-42ab-969e-37289a0cc4b7", 00:16:25.079 "assigned_rate_limits": { 00:16:25.079 "rw_ios_per_sec": 0, 00:16:25.079 "rw_mbytes_per_sec": 0, 00:16:25.079 "r_mbytes_per_sec": 0, 00:16:25.079 "w_mbytes_per_sec": 0 00:16:25.079 }, 00:16:25.079 "claimed": false, 00:16:25.079 "zoned": false, 00:16:25.079 "supported_io_types": { 00:16:25.079 "read": true, 00:16:25.079 "write": true, 00:16:25.079 "unmap": true, 00:16:25.079 "flush": false, 00:16:25.079 "reset": true, 00:16:25.079 "nvme_admin": false, 00:16:25.079 "nvme_io": false, 00:16:25.079 "nvme_io_md": false, 00:16:25.079 "write_zeroes": true, 00:16:25.079 "zcopy": false, 00:16:25.079 "get_zone_info": false, 00:16:25.079 "zone_management": false, 00:16:25.079 "zone_append": false, 00:16:25.079 "compare": false, 00:16:25.079 "compare_and_write": false, 00:16:25.079 "abort": false, 00:16:25.079 "seek_hole": true, 00:16:25.079 "seek_data": true, 00:16:25.079 "copy": false, 00:16:25.079 "nvme_iov_md": false 00:16:25.079 }, 00:16:25.079 "driver_specific": { 00:16:25.079 "lvol": { 00:16:25.079 "lvol_store_uuid": "50e76d5a-2557-477e-a6d0-091e1885f989", 00:16:25.079 "base_bdev": "nvme0n1", 00:16:25.079 "thin_provision": true, 00:16:25.079 "num_allocated_clusters": 0, 00:16:25.079 "snapshot": false, 00:16:25.079 "clone": false, 00:16:25.079 "esnap_clone": false 00:16:25.079 } 00:16:25.079 } 00:16:25.079 } 00:16:25.079 ]' 00:16:25.079 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:25.340 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:25.340 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:25.340 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:25.340 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:25.340 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:25.340 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:25.340 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:25.340 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:25.599 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:25.599 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:25.599 11:47:38 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size c9182cb3-3651-42ab-969e-37289a0cc4b7 00:16:25.599 11:47:38 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=c9182cb3-3651-42ab-969e-37289a0cc4b7 00:16:25.599 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:25.599 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:25.599 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:25.599 11:47:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c9182cb3-3651-42ab-969e-37289a0cc4b7 00:16:25.599 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:25.599 { 00:16:25.599 "name": "c9182cb3-3651-42ab-969e-37289a0cc4b7", 00:16:25.599 "aliases": [ 00:16:25.599 "lvs/nvme0n1p0" 00:16:25.599 ], 00:16:25.599 "product_name": "Logical Volume", 00:16:25.599 "block_size": 4096, 00:16:25.599 "num_blocks": 26476544, 00:16:25.599 "uuid": "c9182cb3-3651-42ab-969e-37289a0cc4b7", 00:16:25.599 "assigned_rate_limits": { 00:16:25.599 "rw_ios_per_sec": 0, 00:16:25.599 "rw_mbytes_per_sec": 0, 00:16:25.599 "r_mbytes_per_sec": 0, 00:16:25.599 "w_mbytes_per_sec": 0 00:16:25.599 }, 00:16:25.599 "claimed": false, 00:16:25.599 "zoned": false, 00:16:25.599 "supported_io_types": { 00:16:25.599 "read": true, 00:16:25.599 "write": true, 00:16:25.599 "unmap": true, 00:16:25.599 "flush": false, 00:16:25.599 "reset": true, 00:16:25.599 "nvme_admin": false, 00:16:25.599 "nvme_io": false, 00:16:25.599 "nvme_io_md": false, 00:16:25.599 "write_zeroes": true, 00:16:25.599 "zcopy": false, 00:16:25.599 "get_zone_info": false, 00:16:25.599 "zone_management": false, 00:16:25.599 "zone_append": false, 00:16:25.599 "compare": false, 00:16:25.599 "compare_and_write": false, 00:16:25.599 "abort": false, 00:16:25.599 "seek_hole": true, 00:16:25.599 "seek_data": true, 00:16:25.599 "copy": false, 00:16:25.599 "nvme_iov_md": false 00:16:25.599 }, 00:16:25.599 "driver_specific": { 00:16:25.599 "lvol": { 00:16:25.599 "lvol_store_uuid": "50e76d5a-2557-477e-a6d0-091e1885f989", 00:16:25.599 "base_bdev": "nvme0n1", 00:16:25.599 "thin_provision": true, 00:16:25.599 "num_allocated_clusters": 0, 00:16:25.599 "snapshot": false, 00:16:25.599 "clone": false, 00:16:25.599 "esnap_clone": false 00:16:25.599 } 00:16:25.599 } 00:16:25.599 } 00:16:25.599 ]' 00:16:25.857 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:25.857 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:25.857 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:25.857 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:25.857 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:25.857 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:25.857 11:47:39 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:25.857 11:47:39 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:26.114 11:47:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:26.114 11:47:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size c9182cb3-3651-42ab-969e-37289a0cc4b7 00:16:26.114 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=c9182cb3-3651-42ab-969e-37289a0cc4b7 00:16:26.114 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:26.114 11:47:39 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:26.114 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:26.114 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c9182cb3-3651-42ab-969e-37289a0cc4b7 00:16:26.114 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:26.114 { 00:16:26.114 "name": "c9182cb3-3651-42ab-969e-37289a0cc4b7", 00:16:26.114 "aliases": [ 00:16:26.114 "lvs/nvme0n1p0" 00:16:26.114 ], 00:16:26.114 "product_name": "Logical Volume", 00:16:26.114 "block_size": 4096, 00:16:26.114 "num_blocks": 26476544, 00:16:26.114 "uuid": "c9182cb3-3651-42ab-969e-37289a0cc4b7", 00:16:26.114 "assigned_rate_limits": { 00:16:26.114 "rw_ios_per_sec": 0, 00:16:26.114 "rw_mbytes_per_sec": 0, 00:16:26.114 "r_mbytes_per_sec": 0, 00:16:26.114 "w_mbytes_per_sec": 0 00:16:26.114 }, 00:16:26.114 "claimed": false, 00:16:26.114 "zoned": false, 00:16:26.114 "supported_io_types": { 00:16:26.114 "read": true, 00:16:26.114 "write": true, 00:16:26.114 "unmap": true, 00:16:26.114 "flush": false, 00:16:26.114 "reset": true, 00:16:26.114 "nvme_admin": false, 00:16:26.114 "nvme_io": false, 00:16:26.115 "nvme_io_md": false, 00:16:26.115 "write_zeroes": true, 00:16:26.115 "zcopy": false, 00:16:26.115 "get_zone_info": false, 00:16:26.115 "zone_management": false, 00:16:26.115 "zone_append": false, 00:16:26.115 "compare": false, 00:16:26.115 "compare_and_write": false, 00:16:26.115 "abort": false, 00:16:26.115 "seek_hole": true, 00:16:26.115 "seek_data": true, 00:16:26.115 "copy": false, 00:16:26.115 "nvme_iov_md": false 00:16:26.115 }, 00:16:26.115 "driver_specific": { 00:16:26.115 "lvol": { 00:16:26.115 "lvol_store_uuid": "50e76d5a-2557-477e-a6d0-091e1885f989", 00:16:26.115 "base_bdev": "nvme0n1", 00:16:26.115 "thin_provision": true, 00:16:26.115 "num_allocated_clusters": 0, 00:16:26.115 "snapshot": false, 00:16:26.115 "clone": false, 00:16:26.115 "esnap_clone": false 00:16:26.115 } 00:16:26.115 } 00:16:26.115 } 00:16:26.115 ]' 00:16:26.115 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:26.115 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:26.115 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:26.375 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:26.375 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:26.375 11:47:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:26.375 11:47:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:26.375 11:47:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c9182cb3-3651-42ab-969e-37289a0cc4b7 -c nvc0n1p0 --l2p_dram_limit 20 00:16:26.375 [2024-11-19 11:47:39.711440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.375 [2024-11-19 11:47:39.711484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:26.375 [2024-11-19 11:47:39.711498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:26.375 [2024-11-19 11:47:39.711506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.375 [2024-11-19 11:47:39.711547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.375 [2024-11-19 11:47:39.711557] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:26.375 [2024-11-19 11:47:39.711568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:26.375 [2024-11-19 11:47:39.711573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.375 [2024-11-19 11:47:39.711588] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:26.375 [2024-11-19 11:47:39.711774] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:26.375 [2024-11-19 11:47:39.711789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.375 [2024-11-19 11:47:39.711798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:26.375 [2024-11-19 11:47:39.711813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:16:26.375 [2024-11-19 11:47:39.711819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.375 [2024-11-19 11:47:39.711845] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f430682e-1a90-4ca7-96a2-0d361c1b2961 00:16:26.375 [2024-11-19 11:47:39.713133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.375 [2024-11-19 11:47:39.713163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:26.375 [2024-11-19 11:47:39.713171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:26.376 [2024-11-19 11:47:39.713179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.376 [2024-11-19 11:47:39.720124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.376 [2024-11-19 11:47:39.720151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:26.376 [2024-11-19 11:47:39.720159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.894 ms 00:16:26.376 [2024-11-19 11:47:39.720169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.376 [2024-11-19 11:47:39.720226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.376 [2024-11-19 11:47:39.720234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:26.376 [2024-11-19 11:47:39.720241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:26.376 [2024-11-19 11:47:39.720248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.376 [2024-11-19 11:47:39.720284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.376 [2024-11-19 11:47:39.720297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:26.376 [2024-11-19 11:47:39.720304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:26.376 [2024-11-19 11:47:39.720311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.376 [2024-11-19 11:47:39.720330] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:26.376 [2024-11-19 11:47:39.722023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.376 [2024-11-19 11:47:39.722048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:26.376 [2024-11-19 11:47:39.722058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:16:26.376 [2024-11-19 11:47:39.722064] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.376 [2024-11-19 11:47:39.722092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.376 [2024-11-19 11:47:39.722099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:26.376 [2024-11-19 11:47:39.722111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:26.376 [2024-11-19 11:47:39.722119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.376 [2024-11-19 11:47:39.722132] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:26.376 [2024-11-19 11:47:39.722248] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:26.376 [2024-11-19 11:47:39.722261] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:26.376 [2024-11-19 11:47:39.722272] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:26.376 [2024-11-19 11:47:39.722282] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:26.376 [2024-11-19 11:47:39.722289] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:26.376 [2024-11-19 11:47:39.722297] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:26.376 [2024-11-19 11:47:39.722303] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:26.376 [2024-11-19 11:47:39.722311] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:26.376 [2024-11-19 11:47:39.722316] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:26.376 [2024-11-19 11:47:39.722324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.376 [2024-11-19 11:47:39.722329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:26.376 [2024-11-19 11:47:39.722339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:16:26.376 [2024-11-19 11:47:39.722345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.376 [2024-11-19 11:47:39.722421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.376 [2024-11-19 11:47:39.722431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:26.376 [2024-11-19 11:47:39.722441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:26.376 [2024-11-19 11:47:39.722447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.376 [2024-11-19 11:47:39.722524] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:26.376 [2024-11-19 11:47:39.722533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:26.376 [2024-11-19 11:47:39.722541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:26.376 [2024-11-19 11:47:39.722549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:26.376 [2024-11-19 11:47:39.722563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:26.376 
[2024-11-19 11:47:39.722581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:26.376 [2024-11-19 11:47:39.722588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:26.376 [2024-11-19 11:47:39.722600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:26.376 [2024-11-19 11:47:39.722605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:26.376 [2024-11-19 11:47:39.722613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:26.376 [2024-11-19 11:47:39.722620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:26.376 [2024-11-19 11:47:39.722628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:26.376 [2024-11-19 11:47:39.722632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:26.376 [2024-11-19 11:47:39.722645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:26.376 [2024-11-19 11:47:39.722651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:26.376 [2024-11-19 11:47:39.722663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.376 [2024-11-19 11:47:39.722678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:26.376 [2024-11-19 11:47:39.722684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.376 [2024-11-19 11:47:39.722697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:26.376 [2024-11-19 11:47:39.722704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.376 [2024-11-19 11:47:39.722719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:26.376 [2024-11-19 11:47:39.722725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:26.376 [2024-11-19 11:47:39.722739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:26.376 [2024-11-19 11:47:39.722747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:26.376 [2024-11-19 11:47:39.722760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:26.376 [2024-11-19 11:47:39.722766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:26.376 [2024-11-19 11:47:39.722773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:26.376 [2024-11-19 11:47:39.722780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:26.376 [2024-11-19 11:47:39.722788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:26.376 [2024-11-19 11:47:39.722795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:26.376 [2024-11-19 11:47:39.722809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:26.376 [2024-11-19 11:47:39.722816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722823] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:26.376 [2024-11-19 11:47:39.722835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:26.376 [2024-11-19 11:47:39.722843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:26.376 [2024-11-19 11:47:39.722852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:26.376 [2024-11-19 11:47:39.722859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:26.376 [2024-11-19 11:47:39.722867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:26.376 [2024-11-19 11:47:39.722874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:26.376 [2024-11-19 11:47:39.722881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:26.376 [2024-11-19 11:47:39.722888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:26.376 [2024-11-19 11:47:39.722896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:26.376 [2024-11-19 11:47:39.722905] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:26.376 [2024-11-19 11:47:39.722915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:26.376 [2024-11-19 11:47:39.722923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:26.376 [2024-11-19 11:47:39.722933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:26.376 [2024-11-19 11:47:39.722939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:26.376 [2024-11-19 11:47:39.722947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:26.376 [2024-11-19 11:47:39.722953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:26.376 [2024-11-19 11:47:39.722963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:26.376 [2024-11-19 11:47:39.722970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:26.376 [2024-11-19 11:47:39.722983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:26.377 [2024-11-19 11:47:39.722990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:26.377 [2024-11-19 11:47:39.722999] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:26.377 [2024-11-19 11:47:39.723006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:26.377 [2024-11-19 11:47:39.723013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:26.377 [2024-11-19 11:47:39.723020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:26.377 [2024-11-19 11:47:39.723028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:26.377 [2024-11-19 11:47:39.723035] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:26.377 [2024-11-19 11:47:39.723045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:26.377 [2024-11-19 11:47:39.723054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:26.377 [2024-11-19 11:47:39.723063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:26.377 [2024-11-19 11:47:39.723070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:26.377 [2024-11-19 11:47:39.723077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:26.377 [2024-11-19 11:47:39.723083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:26.377 [2024-11-19 11:47:39.723093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:26.377 [2024-11-19 11:47:39.723102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:16:26.377 [2024-11-19 11:47:39.723110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:26.377 [2024-11-19 11:47:39.723137] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
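The layout dump above is consistent with the --l2p_dram_limit 20 passed to bdev_ftl_create: with 20971520 L2P entries at 4 bytes each, the full logical-to-physical table occupies 80 MiB (the 80.00 MiB l2p region), while the DRAM limit caps the resident portion at 20 MiB, which is why the l2p cache later reports a maximum resident size of 19 (of 20) MiB. The arithmetic:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 MiB for the full L2P table
    # --l2p_dram_limit 20 keeps at most ~20 MiB of it resident in DRAM

Scrubbing the five NV cache chunks announced here accounts for most of the ~4.5 s FTL startup reported further down.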
00:16:26.377 [2024-11-19 11:47:39.723148] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:30.660 [2024-11-19 11:47:44.002329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.660 [2024-11-19 11:47:44.002390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:30.660 [2024-11-19 11:47:44.002404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4279.159 ms 00:16:30.660 [2024-11-19 11:47:44.002424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.660 [2024-11-19 11:47:44.021901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.660 [2024-11-19 11:47:44.021953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:30.660 [2024-11-19 11:47:44.021965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.395 ms 00:16:30.660 [2024-11-19 11:47:44.021975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.660 [2024-11-19 11:47:44.022053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.660 [2024-11-19 11:47:44.022063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:30.660 [2024-11-19 11:47:44.022073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:30.660 [2024-11-19 11:47:44.022080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.660 [2024-11-19 11:47:44.032864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.660 [2024-11-19 11:47:44.032914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:30.660 [2024-11-19 11:47:44.032931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.744 ms 00:16:30.660 [2024-11-19 11:47:44.032943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.660 [2024-11-19 11:47:44.032973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.660 [2024-11-19 11:47:44.032986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:30.660 [2024-11-19 11:47:44.032997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:30.660 [2024-11-19 11:47:44.033008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.660 [2024-11-19 11:47:44.033518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.660 [2024-11-19 11:47:44.033550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:30.660 [2024-11-19 11:47:44.033564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:16:30.660 [2024-11-19 11:47:44.033581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.660 [2024-11-19 11:47:44.033731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.660 [2024-11-19 11:47:44.033746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:30.660 [2024-11-19 11:47:44.033758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:16:30.660 [2024-11-19 11:47:44.033772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.660 [2024-11-19 11:47:44.039662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.660 [2024-11-19 11:47:44.039692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:30.660 [2024-11-19 
11:47:44.039701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.866 ms 00:16:30.660 [2024-11-19 11:47:44.039709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.660 [2024-11-19 11:47:44.047252] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:30.660 [2024-11-19 11:47:44.052864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.660 [2024-11-19 11:47:44.052892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:30.660 [2024-11-19 11:47:44.052903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.103 ms 00:16:30.660 [2024-11-19 11:47:44.052909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.918 [2024-11-19 11:47:44.123025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.918 [2024-11-19 11:47:44.123059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:30.918 [2024-11-19 11:47:44.123074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.094 ms 00:16:30.918 [2024-11-19 11:47:44.123081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.918 [2024-11-19 11:47:44.123232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.918 [2024-11-19 11:47:44.123242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:30.918 [2024-11-19 11:47:44.123254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:16:30.918 [2024-11-19 11:47:44.123264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.918 [2024-11-19 11:47:44.127250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.918 [2024-11-19 11:47:44.127281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:30.918 [2024-11-19 11:47:44.127291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.959 ms 00:16:30.918 [2024-11-19 11:47:44.127298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.918 [2024-11-19 11:47:44.130190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.918 [2024-11-19 11:47:44.130217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:30.918 [2024-11-19 11:47:44.130227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.862 ms 00:16:30.918 [2024-11-19 11:47:44.130233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.918 [2024-11-19 11:47:44.130506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.918 [2024-11-19 11:47:44.130516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:30.918 [2024-11-19 11:47:44.130530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:16:30.918 [2024-11-19 11:47:44.130536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.918 [2024-11-19 11:47:44.163159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.918 [2024-11-19 11:47:44.163191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:30.918 [2024-11-19 11:47:44.163202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.597 ms 00:16:30.918 [2024-11-19 11:47:44.163209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.918 [2024-11-19 11:47:44.168126] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.918 [2024-11-19 11:47:44.168160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:30.918 [2024-11-19 11:47:44.168181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.869 ms 00:16:30.918 [2024-11-19 11:47:44.168193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.918 [2024-11-19 11:47:44.171811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.918 [2024-11-19 11:47:44.171837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:30.918 [2024-11-19 11:47:44.171846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.579 ms 00:16:30.918 [2024-11-19 11:47:44.171851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.918 [2024-11-19 11:47:44.176018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.918 [2024-11-19 11:47:44.176053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:30.918 [2024-11-19 11:47:44.176064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.138 ms 00:16:30.918 [2024-11-19 11:47:44.176070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.918 [2024-11-19 11:47:44.176106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.918 [2024-11-19 11:47:44.176113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:30.918 [2024-11-19 11:47:44.176124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:30.918 [2024-11-19 11:47:44.176130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.918 [2024-11-19 11:47:44.176190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.919 [2024-11-19 11:47:44.176196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:30.919 [2024-11-19 11:47:44.176204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:16:30.919 [2024-11-19 11:47:44.176210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.919 [2024-11-19 11:47:44.177327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4465.461 ms, result 0 00:16:30.919 { 00:16:30.919 "name": "ftl0", 00:16:30.919 "uuid": "f430682e-1a90-4ca7-96a2-0d361c1b2961" 00:16:30.919 } 00:16:30.919 11:47:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:30.919 11:47:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:30.919 11:47:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:31.177 11:47:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:31.177 [2024-11-19 11:47:44.491525] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:31.177 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:31.177 Zero copy mechanism will not be used. 00:16:31.177 Running I/O for 4 seconds... 
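All three measurement passes below are driven through the same long-lived bdevperf process by its companion script; only the workload parameters change between runs. As invoked here:

    perf=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    "$perf" perform_tests -q 1   -w randwrite -t 4 -o 69632   # QD1, 68 KiB writes
    "$perf" perform_tests -q 128 -w randwrite -t 4 -o 4096    # QD128, 4 KiB writes
    "$perf" perform_tests -q 128 -w verify    -t 4 -o 4096    # QD128, write-and-readback

The 69632-byte I/O size of the first pass exceeds bdevperf's 65536-byte zero-copy threshold, which is why the log notes that the zero copy mechanism will not be used for that run.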
00:16:33.491 753.00 IOPS, 50.00 MiB/s [2024-11-19T11:47:47.838Z] 769.00 IOPS, 51.07 MiB/s [2024-11-19T11:47:48.772Z] 773.67 IOPS, 51.38 MiB/s [2024-11-19T11:47:48.772Z] 776.75 IOPS, 51.58 MiB/s 00:16:35.360 Latency(us) 00:16:35.360 [2024-11-19T11:47:48.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.360 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:35.360 ftl0 : 4.00 776.75 51.58 0.00 0.00 1348.56 277.27 2533.22 00:16:35.360 [2024-11-19T11:47:48.772Z] =================================================================================================================== 00:16:35.360 [2024-11-19T11:47:48.772Z] Total : 776.75 51.58 0.00 0.00 1348.56 277.27 2533.22 00:16:35.360 [2024-11-19 11:47:48.497571] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:35.360 { 00:16:35.360 "results": [ 00:16:35.360 { 00:16:35.360 "job": "ftl0", 00:16:35.360 "core_mask": "0x1", 00:16:35.360 "workload": "randwrite", 00:16:35.360 "status": "finished", 00:16:35.360 "queue_depth": 1, 00:16:35.360 "io_size": 69632, 00:16:35.360 "runtime": 4.001293, 00:16:35.360 "iops": 776.7489159129311, 00:16:35.360 "mibps": 51.58098269734308, 00:16:35.360 "io_failed": 0, 00:16:35.360 "io_timeout": 0, 00:16:35.360 "avg_latency_us": 1348.5609108009107, 00:16:35.360 "min_latency_us": 277.2676923076923, 00:16:35.360 "max_latency_us": 2533.2184615384617 00:16:35.360 } 00:16:35.360 ], 00:16:35.360 "core_count": 1 00:16:35.360 } 00:16:35.360 11:47:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:35.360 [2024-11-19 11:47:48.602142] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:35.360 Running I/O for 4 seconds... 
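A quick cross-check of the QD1 table above: IOPS times I/O size reproduces the reported bandwidth column.

    echo "776.75 * 69632 / 1048576" | bc -l   # ~51.58 MiB/s, matching the table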
00:16:37.237 7256.00 IOPS, 28.34 MiB/s [2024-11-19T11:47:52.031Z] 6310.50 IOPS, 24.65 MiB/s [2024-11-19T11:47:52.972Z] 6102.33 IOPS, 23.84 MiB/s [2024-11-19T11:47:52.972Z] 5995.25 IOPS, 23.42 MiB/s 00:16:39.560 Latency(us) 00:16:39.560 [2024-11-19T11:47:52.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.560 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.560 ftl0 : 4.03 5985.07 23.38 0.00 0.00 21318.23 378.09 41539.74 00:16:39.560 [2024-11-19T11:47:52.972Z] =================================================================================================================== 00:16:39.560 [2024-11-19T11:47:52.972Z] Total : 5985.07 23.38 0.00 0.00 21318.23 0.00 41539.74 00:16:39.560 [2024-11-19 11:47:52.635108] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:39.560 { 00:16:39.560 "results": [ 00:16:39.560 { 00:16:39.560 "job": "ftl0", 00:16:39.560 "core_mask": "0x1", 00:16:39.560 "workload": "randwrite", 00:16:39.560 "status": "finished", 00:16:39.560 "queue_depth": 128, 00:16:39.560 "io_size": 4096, 00:16:39.560 "runtime": 4.026517, 00:16:39.560 "iops": 5985.073451819525, 00:16:39.560 "mibps": 23.37919317117002, 00:16:39.560 "io_failed": 0, 00:16:39.560 "io_timeout": 0, 00:16:39.560 "avg_latency_us": 21318.226042702063, 00:16:39.560 "min_latency_us": 378.0923076923077, 00:16:39.560 "max_latency_us": 41539.74153846154 00:16:39.560 } 00:16:39.560 ], 00:16:39.560 "core_count": 1 00:16:39.560 } 00:16:39.560 11:47:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:39.560 [2024-11-19 11:47:52.749846] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:39.560 Running I/O for 4 seconds... 
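For the QD128 random-write pass above, Little's law gives a rough sanity check on the reported average latency: queue depth divided by IOPS should approximate it, and it lands within a fraction of a millisecond of the table.

    echo "128 / 5985.07 * 1000000" | bc -l    # ~21387 us vs the 21318 us reported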
00:16:41.441 5011.00 IOPS, 19.57 MiB/s [2024-11-19T11:47:55.796Z] 5504.00 IOPS, 21.50 MiB/s [2024-11-19T11:47:57.176Z] 5347.33 IOPS, 20.89 MiB/s [2024-11-19T11:47:57.176Z] 5282.50 IOPS, 20.63 MiB/s 00:16:43.764 Latency(us) 00:16:43.764 [2024-11-19T11:47:57.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.764 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:43.764 Verification LBA range: start 0x0 length 0x1400000 00:16:43.764 ftl0 : 4.01 5294.60 20.68 0.00 0.00 24104.79 322.95 108890.58 00:16:43.764 [2024-11-19T11:47:57.176Z] =================================================================================================================== 00:16:43.764 [2024-11-19T11:47:57.176Z] Total : 5294.60 20.68 0.00 0.00 24104.79 0.00 108890.58 00:16:43.764 { 00:16:43.764 "results": [ 00:16:43.764 { 00:16:43.764 "job": "ftl0", 00:16:43.764 "core_mask": "0x1", 00:16:43.764 "workload": "verify", 00:16:43.764 "status": "finished", 00:16:43.764 "verify_range": { 00:16:43.764 "start": 0, 00:16:43.764 "length": 20971520 00:16:43.764 }, 00:16:43.764 "queue_depth": 128, 00:16:43.764 "io_size": 4096, 00:16:43.764 "runtime": 4.01296, 00:16:43.764 "iops": 5294.5955105458315, 00:16:43.764 "mibps": 20.682013713069654, 00:16:43.764 "io_failed": 0, 00:16:43.764 "io_timeout": 0, 00:16:43.764 "avg_latency_us": 24104.791280289344, 00:16:43.764 "min_latency_us": 322.95384615384614, 00:16:43.764 "max_latency_us": 108890.58461538462 00:16:43.764 } 00:16:43.764 ], 00:16:43.764 "core_count": 1 00:16:43.764 } 00:16:43.764 [2024-11-19 11:47:56.772210] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:43.764 11:47:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:43.764 [2024-11-19 11:47:56.984670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.764 [2024-11-19 11:47:56.984736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:43.764 [2024-11-19 11:47:56.984754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:43.764 [2024-11-19 11:47:56.984763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.764 [2024-11-19 11:47:56.984790] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:43.764 [2024-11-19 11:47:56.985771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.764 [2024-11-19 11:47:56.985825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:43.764 [2024-11-19 11:47:56.985838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:16:43.764 [2024-11-19 11:47:56.985855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.764 [2024-11-19 11:47:56.988947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.764 [2024-11-19 11:47:56.989002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:43.764 [2024-11-19 11:47:56.989015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:16:43.764 [2024-11-19 11:47:56.989030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.027 [2024-11-19 11:47:57.213682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.027 [2024-11-19 11:47:57.213743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:16:44.027 [2024-11-19 11:47:57.213756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 224.631 ms 00:16:44.027 [2024-11-19 11:47:57.213768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.027 [2024-11-19 11:47:57.219955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.027 [2024-11-19 11:47:57.220007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:44.027 [2024-11-19 11:47:57.220019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.137 ms 00:16:44.027 [2024-11-19 11:47:57.220031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.027 [2024-11-19 11:47:57.223261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.027 [2024-11-19 11:47:57.223324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:44.027 [2024-11-19 11:47:57.223336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.136 ms 00:16:44.027 [2024-11-19 11:47:57.223347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.027 [2024-11-19 11:47:57.230989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.027 [2024-11-19 11:47:57.231051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:44.027 [2024-11-19 11:47:57.231064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.593 ms 00:16:44.027 [2024-11-19 11:47:57.231084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.027 [2024-11-19 11:47:57.231221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.027 [2024-11-19 11:47:57.231254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:44.027 [2024-11-19 11:47:57.231265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:16:44.028 [2024-11-19 11:47:57.231278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.028 [2024-11-19 11:47:57.234895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.028 [2024-11-19 11:47:57.234956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:44.028 [2024-11-19 11:47:57.234967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.598 ms 00:16:44.028 [2024-11-19 11:47:57.234977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.028 [2024-11-19 11:47:57.238124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.028 [2024-11-19 11:47:57.238183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:44.028 [2024-11-19 11:47:57.238193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.098 ms 00:16:44.028 [2024-11-19 11:47:57.238205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.028 [2024-11-19 11:47:57.240881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.028 [2024-11-19 11:47:57.240942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:44.028 [2024-11-19 11:47:57.240952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.629 ms 00:16:44.028 [2024-11-19 11:47:57.240968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.028 [2024-11-19 11:47:57.243480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.028 [2024-11-19 11:47:57.243547] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:44.028 [2024-11-19 11:47:57.243557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.439 ms 00:16:44.028 [2024-11-19 11:47:57.243568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.028 [2024-11-19 11:47:57.243614] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:44.028 [2024-11-19 11:47:57.243635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:44.028 [2024-11-19 11:47:57.243852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.243999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:44.028 [2024-11-19 11:47:57.244429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244681] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:44.029 [2024-11-19 11:47:57.244727] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:44.029 [2024-11-19 11:47:57.244744] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f430682e-1a90-4ca7-96a2-0d361c1b2961 00:16:44.029 [2024-11-19 11:47:57.244756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:44.029 [2024-11-19 11:47:57.244769] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:44.029 [2024-11-19 11:47:57.244780] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:44.029 [2024-11-19 11:47:57.244789] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:44.029 [2024-11-19 11:47:57.244806] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:44.029 [2024-11-19 11:47:57.244815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:44.029 [2024-11-19 11:47:57.244836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:44.029 [2024-11-19 11:47:57.244843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:44.029 [2024-11-19 11:47:57.244856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:44.029 [2024-11-19 11:47:57.244865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.029 [2024-11-19 11:47:57.244877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:44.029 [2024-11-19 11:47:57.244888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:16:44.029 [2024-11-19 11:47:57.244904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.247984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.029 [2024-11-19 11:47:57.248036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:44.029 [2024-11-19 11:47:57.248048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:16:44.029 [2024-11-19 11:47:57.248059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.248247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.029 [2024-11-19 11:47:57.248261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:44.029 [2024-11-19 11:47:57.248271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:16:44.029 [2024-11-19 11:47:57.248292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.258219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.029 [2024-11-19 11:47:57.258280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.029 [2024-11-19 11:47:57.258291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.029 [2024-11-19 11:47:57.258310] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.258381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.029 [2024-11-19 11:47:57.258393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.029 [2024-11-19 11:47:57.258426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.029 [2024-11-19 11:47:57.258439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.258558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.029 [2024-11-19 11:47:57.258575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.029 [2024-11-19 11:47:57.258585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.029 [2024-11-19 11:47:57.258597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.258614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.029 [2024-11-19 11:47:57.258630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.029 [2024-11-19 11:47:57.258639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.029 [2024-11-19 11:47:57.258659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.277316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.029 [2024-11-19 11:47:57.277374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.029 [2024-11-19 11:47:57.277386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.029 [2024-11-19 11:47:57.277398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.292832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.029 [2024-11-19 11:47:57.292892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.029 [2024-11-19 11:47:57.292905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.029 [2024-11-19 11:47:57.292917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.293012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.029 [2024-11-19 11:47:57.293030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.029 [2024-11-19 11:47:57.293041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.029 [2024-11-19 11:47:57.293053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.293101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.029 [2024-11-19 11:47:57.293116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.029 [2024-11-19 11:47:57.293124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.029 [2024-11-19 11:47:57.293139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.293225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.029 [2024-11-19 11:47:57.293240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.029 [2024-11-19 11:47:57.293251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:44.029 [2024-11-19 11:47:57.293262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.293295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.029 [2024-11-19 11:47:57.293308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:44.029 [2024-11-19 11:47:57.293316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.029 [2024-11-19 11:47:57.293328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.293377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.029 [2024-11-19 11:47:57.293392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.029 [2024-11-19 11:47:57.293429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.029 [2024-11-19 11:47:57.293443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.293504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:44.029 [2024-11-19 11:47:57.293582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.029 [2024-11-19 11:47:57.293592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:44.029 [2024-11-19 11:47:57.293609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.029 [2024-11-19 11:47:57.293781] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 309.063 ms, result 0 00:16:44.029 true 00:16:44.029 11:47:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84710 00:16:44.029 11:47:57 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 84710 ']' 00:16:44.029 11:47:57 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 84710 00:16:44.029 11:47:57 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:44.029 11:47:57 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.030 11:47:57 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84710 00:16:44.030 killing process with pid 84710 00:16:44.030 Received shutdown signal, test time was about 4.000000 seconds 00:16:44.030 00:16:44.030 Latency(us) 00:16:44.030 [2024-11-19T11:47:57.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.030 [2024-11-19T11:47:57.442Z] =================================================================================================================== 00:16:44.030 [2024-11-19T11:47:57.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:44.030 11:47:57 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.030 11:47:57 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.030 11:47:57 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84710' 00:16:44.030 11:47:57 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 84710 00:16:44.030 11:47:57 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 84710 00:16:44.291 Remove shared memory files 00:16:44.291 11:47:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:44.291 11:47:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:44.291 11:47:57 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:44.291 11:47:57 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:44.291 11:47:57 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:44.291 11:47:57 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:44.291 11:47:57 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:44.291 11:47:57 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:44.291 ************************************ 00:16:44.291 END TEST ftl_bdevperf 00:16:44.291 ************************************ 00:16:44.291 00:16:44.291 real 0m21.989s 00:16:44.291 user 0m24.612s 00:16:44.291 sys 0m0.976s 00:16:44.291 11:47:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:44.291 11:47:57 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:44.553 11:47:57 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:44.553 11:47:57 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:44.553 11:47:57 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:44.553 11:47:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:44.553 ************************************ 00:16:44.553 START TEST ftl_trim 00:16:44.553 ************************************ 00:16:44.553 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:44.553 * Looking for test storage... 00:16:44.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:44.553 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:44.553 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:44.553 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:44.553 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:44.553 11:47:57 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:44.553 11:47:57 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:44.553 11:47:57 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:44.553 11:47:57 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.553 11:47:57 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:44.553 11:47:57 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:44.553 11:47:57 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:44.553 11:47:57 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:44.554 11:47:57 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:44.554 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.554 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.554 --rc genhtml_branch_coverage=1 00:16:44.554 --rc genhtml_function_coverage=1 00:16:44.554 --rc genhtml_legend=1 00:16:44.554 --rc geninfo_all_blocks=1 00:16:44.554 --rc geninfo_unexecuted_blocks=1 00:16:44.554 00:16:44.554 ' 00:16:44.554 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.554 --rc genhtml_branch_coverage=1 00:16:44.554 --rc genhtml_function_coverage=1 00:16:44.554 --rc genhtml_legend=1 00:16:44.554 --rc geninfo_all_blocks=1 00:16:44.554 --rc geninfo_unexecuted_blocks=1 00:16:44.554 00:16:44.554 ' 00:16:44.554 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.554 --rc genhtml_branch_coverage=1 00:16:44.554 --rc genhtml_function_coverage=1 00:16:44.554 --rc genhtml_legend=1 00:16:44.554 --rc geninfo_all_blocks=1 00:16:44.554 --rc geninfo_unexecuted_blocks=1 00:16:44.554 00:16:44.554 ' 00:16:44.554 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:44.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.554 --rc genhtml_branch_coverage=1 00:16:44.554 --rc genhtml_function_coverage=1 00:16:44.554 --rc genhtml_legend=1 00:16:44.554 --rc geninfo_all_blocks=1 00:16:44.554 --rc geninfo_unexecuted_blocks=1 00:16:44.554 00:16:44.554 ' 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
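The "lt 1.15 2" trace above is scripts/common.sh choosing lcov option sets by comparing version strings component-wise: cmp_versions splits both operands on '.', '-' and ':' (the IFS=.-: assignment in the trace) and compares them field by field. A minimal standalone sketch of that comparison, assuming purely numeric fields and using a hypothetical helper name version_lt rather than the actual common.sh source:

  # Sketch: component-wise "less than" for version strings, assuming
  # numeric fields split on '.', '-' or ':' as in the IFS=.-: trace above.
  version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first operand sorts lower
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first operand sorts higher
    done
    return 1  # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "1.15 < 2"   # same verdict as the 'lt 1.15 2' call traced above

The real helper additionally validates each field, which is what the "decimal 1" / "[[ 1 =~ ^[0-9]+$ ]]" lines in the trace correspond to.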
00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:44.554 11:47:57 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85056 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85056 00:16:44.554 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85056 ']' 00:16:44.554 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.554 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:44.554 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.554 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:44.554 11:47:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:44.554 11:47:57 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:44.816 [2024-11-19 11:47:58.011323] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:44.816 [2024-11-19 11:47:58.011578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85056 ] 00:16:44.816 [2024-11-19 11:47:58.158064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:45.078 [2024-11-19 11:47:58.234517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.078 [2024-11-19 11:47:58.234777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.078 [2024-11-19 11:47:58.234893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.650 11:47:58 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.650 11:47:58 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:45.650 11:47:58 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:45.650 11:47:58 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:45.650 11:47:58 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:45.650 11:47:58 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:45.650 11:47:58 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:45.650 11:47:58 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:45.911 11:47:59 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:45.911 11:47:59 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:45.911 11:47:59 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:45.911 11:47:59 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:45.911 11:47:59 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:45.911 11:47:59 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:45.911 11:47:59 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:45.911 11:47:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:46.172 11:47:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:46.172 { 00:16:46.172 "name": "nvme0n1", 00:16:46.172 "aliases": [ 
00:16:46.172 "24efae11-cbc4-49f9-8b8b-fd4684d2578e" 00:16:46.172 ], 00:16:46.172 "product_name": "NVMe disk", 00:16:46.172 "block_size": 4096, 00:16:46.173 "num_blocks": 1310720, 00:16:46.173 "uuid": "24efae11-cbc4-49f9-8b8b-fd4684d2578e", 00:16:46.173 "numa_id": -1, 00:16:46.173 "assigned_rate_limits": { 00:16:46.173 "rw_ios_per_sec": 0, 00:16:46.173 "rw_mbytes_per_sec": 0, 00:16:46.173 "r_mbytes_per_sec": 0, 00:16:46.173 "w_mbytes_per_sec": 0 00:16:46.173 }, 00:16:46.173 "claimed": true, 00:16:46.173 "claim_type": "read_many_write_one", 00:16:46.173 "zoned": false, 00:16:46.173 "supported_io_types": { 00:16:46.173 "read": true, 00:16:46.173 "write": true, 00:16:46.173 "unmap": true, 00:16:46.173 "flush": true, 00:16:46.173 "reset": true, 00:16:46.173 "nvme_admin": true, 00:16:46.173 "nvme_io": true, 00:16:46.173 "nvme_io_md": false, 00:16:46.173 "write_zeroes": true, 00:16:46.173 "zcopy": false, 00:16:46.173 "get_zone_info": false, 00:16:46.173 "zone_management": false, 00:16:46.173 "zone_append": false, 00:16:46.173 "compare": true, 00:16:46.173 "compare_and_write": false, 00:16:46.173 "abort": true, 00:16:46.173 "seek_hole": false, 00:16:46.173 "seek_data": false, 00:16:46.173 "copy": true, 00:16:46.173 "nvme_iov_md": false 00:16:46.173 }, 00:16:46.173 "driver_specific": { 00:16:46.173 "nvme": [ 00:16:46.173 { 00:16:46.173 "pci_address": "0000:00:11.0", 00:16:46.173 "trid": { 00:16:46.173 "trtype": "PCIe", 00:16:46.173 "traddr": "0000:00:11.0" 00:16:46.173 }, 00:16:46.173 "ctrlr_data": { 00:16:46.173 "cntlid": 0, 00:16:46.173 "vendor_id": "0x1b36", 00:16:46.173 "model_number": "QEMU NVMe Ctrl", 00:16:46.173 "serial_number": "12341", 00:16:46.173 "firmware_revision": "8.0.0", 00:16:46.173 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:46.173 "oacs": { 00:16:46.173 "security": 0, 00:16:46.173 "format": 1, 00:16:46.173 "firmware": 0, 00:16:46.173 "ns_manage": 1 00:16:46.173 }, 00:16:46.173 "multi_ctrlr": false, 00:16:46.173 "ana_reporting": false 00:16:46.173 }, 00:16:46.173 "vs": { 00:16:46.173 "nvme_version": "1.4" 00:16:46.173 }, 00:16:46.173 "ns_data": { 00:16:46.173 "id": 1, 00:16:46.173 "can_share": false 00:16:46.173 } 00:16:46.173 } 00:16:46.173 ], 00:16:46.173 "mp_policy": "active_passive" 00:16:46.173 } 00:16:46.173 } 00:16:46.173 ]' 00:16:46.173 11:47:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:46.173 11:47:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:46.173 11:47:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:46.173 11:47:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:46.173 11:47:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:46.173 11:47:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:46.173 11:47:59 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:46.173 11:47:59 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:46.173 11:47:59 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:46.173 11:47:59 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:46.173 11:47:59 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:46.434 11:47:59 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=50e76d5a-2557-477e-a6d0-091e1885f989 00:16:46.434 11:47:59 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:46.434 11:47:59 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 50e76d5a-2557-477e-a6d0-091e1885f989 00:16:46.695 11:47:59 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:46.959 11:48:00 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=e253a074-6784-4f1f-b3d1-01e1c9a016e7 00:16:46.959 11:48:00 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e253a074-6784-4f1f-b3d1-01e1c9a016e7 00:16:46.959 11:48:00 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=d7bbaeb5-ee79-458c-887b-f5235df92a80 00:16:46.959 11:48:00 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d7bbaeb5-ee79-458c-887b-f5235df92a80 00:16:46.959 11:48:00 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:46.959 11:48:00 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:46.959 11:48:00 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=d7bbaeb5-ee79-458c-887b-f5235df92a80 00:16:46.959 11:48:00 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:46.959 11:48:00 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size d7bbaeb5-ee79-458c-887b-f5235df92a80 00:16:46.959 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=d7bbaeb5-ee79-458c-887b-f5235df92a80 00:16:46.959 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:46.959 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:46.959 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:46.959 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d7bbaeb5-ee79-458c-887b-f5235df92a80 00:16:47.218 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:47.218 { 00:16:47.218 "name": "d7bbaeb5-ee79-458c-887b-f5235df92a80", 00:16:47.218 "aliases": [ 00:16:47.218 "lvs/nvme0n1p0" 00:16:47.218 ], 00:16:47.218 "product_name": "Logical Volume", 00:16:47.218 "block_size": 4096, 00:16:47.218 "num_blocks": 26476544, 00:16:47.218 "uuid": "d7bbaeb5-ee79-458c-887b-f5235df92a80", 00:16:47.218 "assigned_rate_limits": { 00:16:47.218 "rw_ios_per_sec": 0, 00:16:47.218 "rw_mbytes_per_sec": 0, 00:16:47.218 "r_mbytes_per_sec": 0, 00:16:47.218 "w_mbytes_per_sec": 0 00:16:47.218 }, 00:16:47.218 "claimed": false, 00:16:47.218 "zoned": false, 00:16:47.218 "supported_io_types": { 00:16:47.218 "read": true, 00:16:47.218 "write": true, 00:16:47.218 "unmap": true, 00:16:47.218 "flush": false, 00:16:47.218 "reset": true, 00:16:47.218 "nvme_admin": false, 00:16:47.218 "nvme_io": false, 00:16:47.218 "nvme_io_md": false, 00:16:47.218 "write_zeroes": true, 00:16:47.218 "zcopy": false, 00:16:47.218 "get_zone_info": false, 00:16:47.218 "zone_management": false, 00:16:47.218 "zone_append": false, 00:16:47.218 "compare": false, 00:16:47.218 "compare_and_write": false, 00:16:47.218 "abort": false, 00:16:47.218 "seek_hole": true, 00:16:47.219 "seek_data": true, 00:16:47.219 "copy": false, 00:16:47.219 "nvme_iov_md": false 00:16:47.219 }, 00:16:47.219 "driver_specific": { 00:16:47.219 "lvol": { 00:16:47.219 "lvol_store_uuid": "e253a074-6784-4f1f-b3d1-01e1c9a016e7", 00:16:47.219 "base_bdev": "nvme0n1", 00:16:47.219 "thin_provision": true, 00:16:47.219 "num_allocated_clusters": 0, 00:16:47.219 "snapshot": false, 00:16:47.219 "clone": false, 00:16:47.219 "esnap_clone": false 00:16:47.219 } 00:16:47.219 } 00:16:47.219 } 00:16:47.219 ]' 00:16:47.219 11:48:00 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:47.219 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:47.219 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:47.219 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:47.219 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:47.219 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:47.219 11:48:00 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:47.219 11:48:00 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:47.219 11:48:00 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:47.478 11:48:00 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:47.478 11:48:00 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:47.478 11:48:00 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size d7bbaeb5-ee79-458c-887b-f5235df92a80 00:16:47.478 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=d7bbaeb5-ee79-458c-887b-f5235df92a80 00:16:47.478 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:47.478 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:47.478 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:47.479 11:48:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d7bbaeb5-ee79-458c-887b-f5235df92a80 00:16:47.738 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:47.738 { 00:16:47.738 "name": "d7bbaeb5-ee79-458c-887b-f5235df92a80", 00:16:47.738 "aliases": [ 00:16:47.738 "lvs/nvme0n1p0" 00:16:47.738 ], 00:16:47.738 "product_name": "Logical Volume", 00:16:47.738 "block_size": 4096, 00:16:47.738 "num_blocks": 26476544, 00:16:47.738 "uuid": "d7bbaeb5-ee79-458c-887b-f5235df92a80", 00:16:47.738 "assigned_rate_limits": { 00:16:47.738 "rw_ios_per_sec": 0, 00:16:47.738 "rw_mbytes_per_sec": 0, 00:16:47.738 "r_mbytes_per_sec": 0, 00:16:47.738 "w_mbytes_per_sec": 0 00:16:47.738 }, 00:16:47.738 "claimed": false, 00:16:47.738 "zoned": false, 00:16:47.738 "supported_io_types": { 00:16:47.738 "read": true, 00:16:47.738 "write": true, 00:16:47.738 "unmap": true, 00:16:47.738 "flush": false, 00:16:47.738 "reset": true, 00:16:47.738 "nvme_admin": false, 00:16:47.738 "nvme_io": false, 00:16:47.738 "nvme_io_md": false, 00:16:47.738 "write_zeroes": true, 00:16:47.738 "zcopy": false, 00:16:47.738 "get_zone_info": false, 00:16:47.738 "zone_management": false, 00:16:47.738 "zone_append": false, 00:16:47.738 "compare": false, 00:16:47.738 "compare_and_write": false, 00:16:47.738 "abort": false, 00:16:47.738 "seek_hole": true, 00:16:47.738 "seek_data": true, 00:16:47.738 "copy": false, 00:16:47.738 "nvme_iov_md": false 00:16:47.738 }, 00:16:47.738 "driver_specific": { 00:16:47.738 "lvol": { 00:16:47.738 "lvol_store_uuid": "e253a074-6784-4f1f-b3d1-01e1c9a016e7", 00:16:47.738 "base_bdev": "nvme0n1", 00:16:47.738 "thin_provision": true, 00:16:47.738 "num_allocated_clusters": 0, 00:16:47.738 "snapshot": false, 00:16:47.738 "clone": false, 00:16:47.738 "esnap_clone": false 00:16:47.738 } 00:16:47.738 } 00:16:47.738 } 00:16:47.738 ]' 00:16:47.738 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:47.738 11:48:01 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:47.738 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:47.738 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:47.738 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:47.738 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:47.738 11:48:01 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:47.738 11:48:01 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:47.996 11:48:01 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:47.996 11:48:01 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:47.996 11:48:01 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size d7bbaeb5-ee79-458c-887b-f5235df92a80 00:16:47.996 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=d7bbaeb5-ee79-458c-887b-f5235df92a80 00:16:47.996 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:47.996 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:47.996 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:47.996 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d7bbaeb5-ee79-458c-887b-f5235df92a80 00:16:48.255 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:48.255 { 00:16:48.255 "name": "d7bbaeb5-ee79-458c-887b-f5235df92a80", 00:16:48.255 "aliases": [ 00:16:48.255 "lvs/nvme0n1p0" 00:16:48.255 ], 00:16:48.255 "product_name": "Logical Volume", 00:16:48.255 "block_size": 4096, 00:16:48.255 "num_blocks": 26476544, 00:16:48.255 "uuid": "d7bbaeb5-ee79-458c-887b-f5235df92a80", 00:16:48.255 "assigned_rate_limits": { 00:16:48.255 "rw_ios_per_sec": 0, 00:16:48.255 "rw_mbytes_per_sec": 0, 00:16:48.255 "r_mbytes_per_sec": 0, 00:16:48.255 "w_mbytes_per_sec": 0 00:16:48.255 }, 00:16:48.255 "claimed": false, 00:16:48.255 "zoned": false, 00:16:48.255 "supported_io_types": { 00:16:48.255 "read": true, 00:16:48.255 "write": true, 00:16:48.255 "unmap": true, 00:16:48.255 "flush": false, 00:16:48.255 "reset": true, 00:16:48.255 "nvme_admin": false, 00:16:48.255 "nvme_io": false, 00:16:48.255 "nvme_io_md": false, 00:16:48.255 "write_zeroes": true, 00:16:48.255 "zcopy": false, 00:16:48.255 "get_zone_info": false, 00:16:48.255 "zone_management": false, 00:16:48.255 "zone_append": false, 00:16:48.255 "compare": false, 00:16:48.255 "compare_and_write": false, 00:16:48.255 "abort": false, 00:16:48.255 "seek_hole": true, 00:16:48.255 "seek_data": true, 00:16:48.255 "copy": false, 00:16:48.255 "nvme_iov_md": false 00:16:48.255 }, 00:16:48.255 "driver_specific": { 00:16:48.255 "lvol": { 00:16:48.255 "lvol_store_uuid": "e253a074-6784-4f1f-b3d1-01e1c9a016e7", 00:16:48.255 "base_bdev": "nvme0n1", 00:16:48.255 "thin_provision": true, 00:16:48.255 "num_allocated_clusters": 0, 00:16:48.255 "snapshot": false, 00:16:48.255 "clone": false, 00:16:48.255 "esnap_clone": false 00:16:48.255 } 00:16:48.255 } 00:16:48.255 } 00:16:48.255 ]' 00:16:48.255 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:48.255 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:48.255 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:48.255 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:48.255 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:48.255 11:48:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:48.255 11:48:01 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:48.255 11:48:01 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d7bbaeb5-ee79-458c-887b-f5235df92a80 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:48.516 [2024-11-19 11:48:01.768000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.516 [2024-11-19 11:48:01.768043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:48.516 [2024-11-19 11:48:01.768056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:48.516 [2024-11-19 11:48:01.768073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.516 [2024-11-19 11:48:01.770050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.516 [2024-11-19 11:48:01.770082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:48.516 [2024-11-19 11:48:01.770098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.938 ms 00:16:48.516 [2024-11-19 11:48:01.770115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.517 [2024-11-19 11:48:01.770184] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:48.517 [2024-11-19 11:48:01.770372] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:48.517 [2024-11-19 11:48:01.770393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.517 [2024-11-19 11:48:01.770418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:48.517 [2024-11-19 11:48:01.770426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:16:48.517 [2024-11-19 11:48:01.770433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.517 [2024-11-19 11:48:01.770509] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c 00:16:48.517 [2024-11-19 11:48:01.771793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.517 [2024-11-19 11:48:01.771820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:48.517 [2024-11-19 11:48:01.771830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:48.517 [2024-11-19 11:48:01.771836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.517 [2024-11-19 11:48:01.778654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.517 [2024-11-19 11:48:01.778681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:48.517 [2024-11-19 11:48:01.778693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.749 ms 00:16:48.517 [2024-11-19 11:48:01.778698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.517 [2024-11-19 11:48:01.778802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.517 [2024-11-19 11:48:01.778811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:48.517 [2024-11-19 11:48:01.778820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.055 ms 00:16:48.517 [2024-11-19 11:48:01.778825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.517 [2024-11-19 11:48:01.778865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.517 [2024-11-19 11:48:01.778873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:48.517 [2024-11-19 11:48:01.778881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:48.517 [2024-11-19 11:48:01.778887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.517 [2024-11-19 11:48:01.778916] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:48.517 [2024-11-19 11:48:01.780539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.517 [2024-11-19 11:48:01.780564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:48.517 [2024-11-19 11:48:01.780574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:16:48.517 [2024-11-19 11:48:01.780581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.517 [2024-11-19 11:48:01.780620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.517 [2024-11-19 11:48:01.780641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:48.517 [2024-11-19 11:48:01.780648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:48.517 [2024-11-19 11:48:01.780665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.517 [2024-11-19 11:48:01.780691] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:48.517 [2024-11-19 11:48:01.780808] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:48.517 [2024-11-19 11:48:01.780821] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:48.517 [2024-11-19 11:48:01.780834] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:48.517 [2024-11-19 11:48:01.780842] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:48.517 [2024-11-19 11:48:01.780850] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:48.517 [2024-11-19 11:48:01.780866] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:48.517 [2024-11-19 11:48:01.780873] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:48.517 [2024-11-19 11:48:01.780879] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:48.517 [2024-11-19 11:48:01.780886] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:48.517 [2024-11-19 11:48:01.780893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.517 [2024-11-19 11:48:01.780899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:48.517 [2024-11-19 11:48:01.780905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:16:48.517 [2024-11-19 11:48:01.780922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.517 [2024-11-19 11:48:01.780995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.517 
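The layout figures just traced are internally consistent and can be cross-checked. At the 4096-byte FTL block size implied by the base bdev (26476544 blocks for 103424.00 MiB), 23592960 L2P entries of 4 bytes each need exactly 90 MiB of metadata, and they map 23592960 blocks, i.e. 90 GiB of logical space; the "Region l2p ... blocks: 90.00 MiB" line in the layout dump below confirms the first figure. Plain shell arithmetic, using only values printed in this log:

  # Cross-check the ftl_layout.c numbers traced above.
  entries=23592960   # "L2P entries"
  addr=4             # "L2P address size" (bytes per entry)
  blk=4096           # FTL block size (matches the base bdev's block_size)

  echo "L2P table: $(( entries * addr / 1024 / 1024 )) MiB"         # -> 90 MiB
  echo "Mapped:    $(( entries * blk / 1024 / 1024 / 1024 )) GiB"   # -> 90 GiB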
[2024-11-19 11:48:01.781005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:48.517 [2024-11-19 11:48:01.781011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:48.517 [2024-11-19 11:48:01.781018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.517 [2024-11-19 11:48:01.781122] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:48.517 [2024-11-19 11:48:01.781131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:48.517 [2024-11-19 11:48:01.781138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:48.517 [2024-11-19 11:48:01.781154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:48.517 [2024-11-19 11:48:01.781170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:48.517 [2024-11-19 11:48:01.781181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:48.517 [2024-11-19 11:48:01.781187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:48.517 [2024-11-19 11:48:01.781201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:48.517 [2024-11-19 11:48:01.781209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:48.517 [2024-11-19 11:48:01.781215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:48.517 [2024-11-19 11:48:01.781224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:48.517 [2024-11-19 11:48:01.781231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:48.517 [2024-11-19 11:48:01.781239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:48.517 [2024-11-19 11:48:01.781253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:48.517 [2024-11-19 11:48:01.781258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:48.517 [2024-11-19 11:48:01.781273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.517 [2024-11-19 11:48:01.781286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:48.517 [2024-11-19 11:48:01.781293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.517 [2024-11-19 11:48:01.781307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:48.517 [2024-11-19 11:48:01.781314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.517 [2024-11-19 11:48:01.781342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:48.517 [2024-11-19 11:48:01.781352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.517 [2024-11-19 11:48:01.781365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:48.517 [2024-11-19 11:48:01.781371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:48.517 [2024-11-19 11:48:01.781385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:48.517 [2024-11-19 11:48:01.781392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:48.517 [2024-11-19 11:48:01.781398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:48.517 [2024-11-19 11:48:01.781416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:48.517 [2024-11-19 11:48:01.781423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:48.517 [2024-11-19 11:48:01.781430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:48.517 [2024-11-19 11:48:01.781443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:48.517 [2024-11-19 11:48:01.781449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781456] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:48.517 [2024-11-19 11:48:01.781463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:48.517 [2024-11-19 11:48:01.781473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:48.517 [2024-11-19 11:48:01.781479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.517 [2024-11-19 11:48:01.781488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:48.517 [2024-11-19 11:48:01.781494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:48.517 [2024-11-19 11:48:01.781502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:48.517 [2024-11-19 11:48:01.781508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:48.517 [2024-11-19 11:48:01.781515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:48.517 [2024-11-19 11:48:01.781521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:48.518 [2024-11-19 11:48:01.781532] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:48.518 [2024-11-19 11:48:01.781541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:48.518 [2024-11-19 11:48:01.781550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:48.518 [2024-11-19 11:48:01.781556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:48.518 [2024-11-19 11:48:01.781564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:48.518 [2024-11-19 11:48:01.781570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:48.518 [2024-11-19 11:48:01.781578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:48.518 [2024-11-19 11:48:01.781583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:48.518 [2024-11-19 11:48:01.781592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:48.518 [2024-11-19 11:48:01.781597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:48.518 [2024-11-19 11:48:01.781605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:48.518 [2024-11-19 11:48:01.781611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:48.518 [2024-11-19 11:48:01.781618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:48.518 [2024-11-19 11:48:01.781623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:48.518 [2024-11-19 11:48:01.781629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:48.518 [2024-11-19 11:48:01.781635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:48.518 [2024-11-19 11:48:01.781641] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:48.518 [2024-11-19 11:48:01.781649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:48.518 [2024-11-19 11:48:01.781656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:48.518 [2024-11-19 11:48:01.781661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:48.518 [2024-11-19 11:48:01.781668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:48.518 [2024-11-19 11:48:01.781674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:48.518 [2024-11-19 11:48:01.781682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.518 [2024-11-19 11:48:01.781688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:48.518 [2024-11-19 11:48:01.781700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:16:48.518 [2024-11-19 11:48:01.781706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.518 [2024-11-19 11:48:01.781778] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:48.518 [2024-11-19 11:48:01.781786] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:51.064 [2024-11-19 11:48:04.242017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.242209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:51.064 [2024-11-19 11:48:04.242296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2460.220 ms 00:16:51.064 [2024-11-19 11:48:04.242323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.263928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.264349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:51.064 [2024-11-19 11:48:04.264675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.456 ms 00:16:51.064 [2024-11-19 11:48:04.264832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.265446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.265654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:51.064 [2024-11-19 11:48:04.265804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:16:51.064 [2024-11-19 11:48:04.265942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.278342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.278493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:51.064 [2024-11-19 11:48:04.278559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.256 ms 00:16:51.064 [2024-11-19 11:48:04.278595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.278682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.278772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:51.064 [2024-11-19 11:48:04.278789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:51.064 [2024-11-19 11:48:04.278808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.279220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.279251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:51.064 [2024-11-19 11:48:04.279273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:16:51.064 [2024-11-19 11:48:04.279282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.279440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.279454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:51.064 [2024-11-19 11:48:04.279468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:16:51.064 [2024-11-19 11:48:04.279477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.286612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.286658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:51.064 [2024-11-19 11:48:04.286670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.097 ms 00:16:51.064 [2024-11-19 11:48:04.286689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.296251] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:51.064 [2024-11-19 11:48:04.313599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.313635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:51.064 [2024-11-19 11:48:04.313646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.816 ms 00:16:51.064 [2024-11-19 11:48:04.313656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.371764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.371810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:51.064 [2024-11-19 11:48:04.371823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.022 ms 00:16:51.064 [2024-11-19 11:48:04.371837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.372029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.372043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:51.064 [2024-11-19 11:48:04.372067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:16:51.064 [2024-11-19 11:48:04.372076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.375525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.375572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:51.064 [2024-11-19 11:48:04.375583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.402 ms 00:16:51.064 [2024-11-19 11:48:04.375593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.378670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.378703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:51.064 [2024-11-19 11:48:04.378713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.027 ms 00:16:51.064 [2024-11-19 11:48:04.378723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.379056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.379075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:51.064 [2024-11-19 11:48:04.379087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:16:51.064 [2024-11-19 11:48:04.379098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.409479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.409518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:51.064 [2024-11-19 11:48:04.409529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.349 ms 00:16:51.064 [2024-11-19 11:48:04.409538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
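Every management step above is bracketed by the same four trace_step notices (Action, name, duration, status), so the per-step cost of the 'FTL startup' process can be totaled straight from this console output. A small helper, assuming the console log has been saved to a file named ftl.log (the file name is illustrative):

    # sum every 'duration: X ms' value printed by trace_step
    grep -o 'duration: [0-9.]* ms' ftl.log |
        awk '{ sum += $2 } END { printf "total: %.3f ms\n", sum }'

On this run the scrub of the 5 NV cache chunks dominates at 2460.220 ms of the 2653.522 ms 'FTL startup' total reported below.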
00:16:51.064 [2024-11-19 11:48:04.413870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.413907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:51.064 [2024-11-19 11:48:04.413922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.261 ms 00:16:51.064 [2024-11-19 11:48:04.413935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.417212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.417246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:51.064 [2024-11-19 11:48:04.417256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.230 ms 00:16:51.064 [2024-11-19 11:48:04.417264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.420538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.420572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:51.064 [2024-11-19 11:48:04.420582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms 00:16:51.064 [2024-11-19 11:48:04.420595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.064 [2024-11-19 11:48:04.420660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.064 [2024-11-19 11:48:04.420673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:51.065 [2024-11-19 11:48:04.420682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:51.065 [2024-11-19 11:48:04.420693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.065 [2024-11-19 11:48:04.420771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.065 [2024-11-19 11:48:04.420783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:51.065 [2024-11-19 11:48:04.420803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:51.065 [2024-11-19 11:48:04.420813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.065 [2024-11-19 11:48:04.421859] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:51.065 [2024-11-19 11:48:04.422867] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2653.522 ms, result 0 00:16:51.065 [2024-11-19 11:48:04.423405] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:51.065 { 00:16:51.065 "name": "ftl0", 00:16:51.065 "uuid": "9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c" 00:16:51.065 } 00:16:51.065 11:48:04 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:51.065 11:48:04 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:51.065 11:48:04 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:51.065 11:48:04 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:51.065 11:48:04 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:51.065 11:48:04 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:51.065 11:48:04 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:51.326 11:48:04 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:51.587 [ 00:16:51.587 { 00:16:51.587 "name": "ftl0", 00:16:51.587 "aliases": [ 00:16:51.587 "9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c" 00:16:51.587 ], 00:16:51.587 "product_name": "FTL disk", 00:16:51.587 "block_size": 4096, 00:16:51.587 "num_blocks": 23592960, 00:16:51.587 "uuid": "9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c", 00:16:51.587 "assigned_rate_limits": { 00:16:51.587 "rw_ios_per_sec": 0, 00:16:51.587 "rw_mbytes_per_sec": 0, 00:16:51.587 "r_mbytes_per_sec": 0, 00:16:51.587 "w_mbytes_per_sec": 0 00:16:51.587 }, 00:16:51.588 "claimed": false, 00:16:51.588 "zoned": false, 00:16:51.588 "supported_io_types": { 00:16:51.588 "read": true, 00:16:51.588 "write": true, 00:16:51.588 "unmap": true, 00:16:51.588 "flush": true, 00:16:51.588 "reset": false, 00:16:51.588 "nvme_admin": false, 00:16:51.588 "nvme_io": false, 00:16:51.588 "nvme_io_md": false, 00:16:51.588 "write_zeroes": true, 00:16:51.588 "zcopy": false, 00:16:51.588 "get_zone_info": false, 00:16:51.588 "zone_management": false, 00:16:51.588 "zone_append": false, 00:16:51.588 "compare": false, 00:16:51.588 "compare_and_write": false, 00:16:51.588 "abort": false, 00:16:51.588 "seek_hole": false, 00:16:51.588 "seek_data": false, 00:16:51.588 "copy": false, 00:16:51.588 "nvme_iov_md": false 00:16:51.588 }, 00:16:51.588 "driver_specific": { 00:16:51.588 "ftl": { 00:16:51.588 "base_bdev": "d7bbaeb5-ee79-458c-887b-f5235df92a80", 00:16:51.588 "cache": "nvc0n1p0" 00:16:51.588 } 00:16:51.588 } 00:16:51.588 } 00:16:51.588 ] 00:16:51.588 11:48:04 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:51.588 11:48:04 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:51.588 11:48:04 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:51.849 11:48:05 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:51.849 11:48:05 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:51.850 11:48:05 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:51.850 { 00:16:51.850 "name": "ftl0", 00:16:51.850 "aliases": [ 00:16:51.850 "9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c" 00:16:51.850 ], 00:16:51.850 "product_name": "FTL disk", 00:16:51.850 "block_size": 4096, 00:16:51.850 "num_blocks": 23592960, 00:16:51.850 "uuid": "9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c", 00:16:51.850 "assigned_rate_limits": { 00:16:51.850 "rw_ios_per_sec": 0, 00:16:51.850 "rw_mbytes_per_sec": 0, 00:16:51.850 "r_mbytes_per_sec": 0, 00:16:51.850 "w_mbytes_per_sec": 0 00:16:51.850 }, 00:16:51.850 "claimed": false, 00:16:51.850 "zoned": false, 00:16:51.850 "supported_io_types": { 00:16:51.850 "read": true, 00:16:51.850 "write": true, 00:16:51.850 "unmap": true, 00:16:51.850 "flush": true, 00:16:51.850 "reset": false, 00:16:51.850 "nvme_admin": false, 00:16:51.850 "nvme_io": false, 00:16:51.850 "nvme_io_md": false, 00:16:51.850 "write_zeroes": true, 00:16:51.850 "zcopy": false, 00:16:51.850 "get_zone_info": false, 00:16:51.850 "zone_management": false, 00:16:51.850 "zone_append": false, 00:16:51.850 "compare": false, 00:16:51.850 "compare_and_write": false, 00:16:51.850 "abort": false, 00:16:51.850 "seek_hole": false, 00:16:51.850 "seek_data": false, 00:16:51.850 "copy": false, 00:16:51.850 "nvme_iov_md": false 00:16:51.850 }, 00:16:51.850 "driver_specific": { 00:16:51.850 "ftl": { 00:16:51.850 "base_bdev": "d7bbaeb5-ee79-458c-887b-f5235df92a80", 
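The traced commands around this point (trim.sh@54 through @56) wrap the RPC output in a JSON envelope: save_subsystem_config -n bdev emits only the bdev subsystem's configuration, and the surrounding echo calls turn it into a complete subsystems document. A minimal sketch of that pattern, assuming the same rpc.py path as the trace and writing to the ftl.json path consumed by spdk_dd later in this log:

    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

spdk_dd can then reconstruct the ftl0 bdev stack from that file on its own, without talking to the original target process.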
00:16:51.850 "cache": "nvc0n1p0" 00:16:51.850 } 00:16:51.850 } 00:16:51.850 } 00:16:51.850 ]' 00:16:51.850 11:48:05 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:52.111 11:48:05 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:52.111 11:48:05 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:52.111 [2024-11-19 11:48:05.447854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.111 [2024-11-19 11:48:05.447888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:52.111 [2024-11-19 11:48:05.447899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:52.111 [2024-11-19 11:48:05.447905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.111 [2024-11-19 11:48:05.447941] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:52.111 [2024-11-19 11:48:05.448497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.111 [2024-11-19 11:48:05.448520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:52.111 [2024-11-19 11:48:05.448527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:16:52.111 [2024-11-19 11:48:05.448535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.111 [2024-11-19 11:48:05.449005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.111 [2024-11-19 11:48:05.449033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:52.111 [2024-11-19 11:48:05.449039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:16:52.111 [2024-11-19 11:48:05.449049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.111 [2024-11-19 11:48:05.451782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.111 [2024-11-19 11:48:05.451802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:52.111 [2024-11-19 11:48:05.451809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.694 ms 00:16:52.111 [2024-11-19 11:48:05.451817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.111 [2024-11-19 11:48:05.457011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.111 [2024-11-19 11:48:05.457039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:52.111 [2024-11-19 11:48:05.457047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.147 ms 00:16:52.111 [2024-11-19 11:48:05.457057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.111 [2024-11-19 11:48:05.458699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.111 [2024-11-19 11:48:05.458825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:52.111 [2024-11-19 11:48:05.458837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.559 ms 00:16:52.111 [2024-11-19 11:48:05.458845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.111 [2024-11-19 11:48:05.463593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.111 [2024-11-19 11:48:05.463624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:52.111 [2024-11-19 11:48:05.463633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.711 ms 00:16:52.111 [2024-11-19 11:48:05.463641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.111 [2024-11-19 11:48:05.463793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.111 [2024-11-19 11:48:05.463807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:52.111 [2024-11-19 11:48:05.463815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:16:52.111 [2024-11-19 11:48:05.463825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.111 [2024-11-19 11:48:05.465722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.111 [2024-11-19 11:48:05.465829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:52.111 [2024-11-19 11:48:05.465841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.869 ms 00:16:52.111 [2024-11-19 11:48:05.465852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.111 [2024-11-19 11:48:05.467241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.111 [2024-11-19 11:48:05.467273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:52.111 [2024-11-19 11:48:05.467280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:16:52.111 [2024-11-19 11:48:05.467288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.111 [2024-11-19 11:48:05.468385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.111 [2024-11-19 11:48:05.468426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:52.111 [2024-11-19 11:48:05.468434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:16:52.111 [2024-11-19 11:48:05.468441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.111 [2024-11-19 11:48:05.469501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.111 [2024-11-19 11:48:05.469533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:52.111 [2024-11-19 11:48:05.469541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:16:52.111 [2024-11-19 11:48:05.469547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.111 [2024-11-19 11:48:05.469585] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:52.111 [2024-11-19 11:48:05.469598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:52.111 [2024-11-19 11:48:05.469607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:52.112 [2024-11-19 11:48:05.469616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:52.112 [2024-11-19 11:48:05.469622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:52.112 [2024-11-19 11:48:05.469630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:52.112 [2024-11-19 11:48:05.469636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:52.112 [2024-11-19 11:48:05.469643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:52.112 [2024-11-19 11:48:05.469649] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free [... bands 9 through 81 elided: all 73 entries are identical, each reading '0 / 261120 wr_cnt: 0 state: free' ...] 00:16:52.112 [2024-11-19 11:48:05.470145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120
wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:52.113 [2024-11-19 11:48:05.470295] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:52.113 [2024-11-19 11:48:05.470301] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c 00:16:52.113 [2024-11-19 11:48:05.470308] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:52.113 [2024-11-19 11:48:05.470314] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:52.113 [2024-11-19 11:48:05.470322] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:52.113 [2024-11-19 11:48:05.470328] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:52.113 [2024-11-19 11:48:05.470335] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:52.113 [2024-11-19 11:48:05.470341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:52.113 
[2024-11-19 11:48:05.470351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:52.113 [2024-11-19 11:48:05.470356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:52.113 [2024-11-19 11:48:05.470362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:52.113 [2024-11-19 11:48:05.470369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.113 [2024-11-19 11:48:05.470376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:52.113 [2024-11-19 11:48:05.470392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:16:52.113 [2024-11-19 11:48:05.470401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.113 [2024-11-19 11:48:05.471986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.113 [2024-11-19 11:48:05.472010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:52.113 [2024-11-19 11:48:05.472018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:16:52.113 [2024-11-19 11:48:05.472027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.113 [2024-11-19 11:48:05.472131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.113 [2024-11-19 11:48:05.472141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:52.113 [2024-11-19 11:48:05.472149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:52.113 [2024-11-19 11:48:05.472156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.113 [2024-11-19 11:48:05.478135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.113 [2024-11-19 11:48:05.478167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:52.113 [2024-11-19 11:48:05.478175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.113 [2024-11-19 11:48:05.478185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.113 [2024-11-19 11:48:05.478274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.113 [2024-11-19 11:48:05.478284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:52.113 [2024-11-19 11:48:05.478291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.114 [2024-11-19 11:48:05.478300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.114 [2024-11-19 11:48:05.478366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.114 [2024-11-19 11:48:05.478377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:52.114 [2024-11-19 11:48:05.478384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.114 [2024-11-19 11:48:05.478391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.114 [2024-11-19 11:48:05.478433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.114 [2024-11-19 11:48:05.478443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:52.114 [2024-11-19 11:48:05.478449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.114 [2024-11-19 11:48:05.478457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.114 [2024-11-19 11:48:05.489947] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:52.114 [2024-11-19 11:48:05.489986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:52.114 [2024-11-19 11:48:05.489996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.114 [2024-11-19 11:48:05.490006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.114 [2024-11-19 11:48:05.499040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.114 [2024-11-19 11:48:05.499077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:52.114 [2024-11-19 11:48:05.499086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.114 [2024-11-19 11:48:05.499097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.114 [2024-11-19 11:48:05.499152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.114 [2024-11-19 11:48:05.499162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:52.114 [2024-11-19 11:48:05.499170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.114 [2024-11-19 11:48:05.499177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.114 [2024-11-19 11:48:05.499229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.114 [2024-11-19 11:48:05.499240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:52.114 [2024-11-19 11:48:05.499246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.114 [2024-11-19 11:48:05.499254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.114 [2024-11-19 11:48:05.499327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.114 [2024-11-19 11:48:05.499337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:52.114 [2024-11-19 11:48:05.499354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.114 [2024-11-19 11:48:05.499362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.114 [2024-11-19 11:48:05.499421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.114 [2024-11-19 11:48:05.499434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:52.114 [2024-11-19 11:48:05.499440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.114 [2024-11-19 11:48:05.499450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.114 [2024-11-19 11:48:05.499493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.114 [2024-11-19 11:48:05.499502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:52.114 [2024-11-19 11:48:05.499510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.114 [2024-11-19 11:48:05.499517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.114 [2024-11-19 11:48:05.499566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.114 [2024-11-19 11:48:05.499577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:52.114 [2024-11-19 11:48:05.499583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.114 [2024-11-19 11:48:05.499590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.114 
[2024-11-19 11:48:05.499763] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.874 ms, result 0 00:16:52.114 true 00:16:52.375 11:48:05 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85056 00:16:52.375 11:48:05 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85056 ']' 00:16:52.375 11:48:05 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85056 00:16:52.375 11:48:05 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:52.375 11:48:05 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.375 11:48:05 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85056 00:16:52.375 killing process with pid 85056 00:16:52.375 11:48:05 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.375 11:48:05 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:52.375 11:48:05 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85056' 00:16:52.375 11:48:05 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85056 00:16:52.375 11:48:05 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85056 00:16:57.664 11:48:10 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:58.234 65536+0 records in 00:16:58.234 65536+0 records out 00:16:58.234 268435456 bytes (268 MB, 256 MiB) copied, 1.08297 s, 248 MB/s 00:16:58.234 11:48:11 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:58.493 [2024-11-19 11:48:11.658572] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
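The write phase traced above stages 256 MiB of random data and pushes it through the FTL bdev: dd produces the pattern file (its of= argument is truncated in the trace), and spdk_dd, a standalone copy tool, attaches to the bdevs described by ftl.json and writes the file to ftl0. Condensed, and assuming the truncated dd output path is the random_pattern file that spdk_dd reads:

    pattern=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
    dd if=/dev/urandom of="$pattern" bs=4K count=65536    # 65536 x 4 KiB = 256 MiB
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$pattern" --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json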
00:16:58.493 [2024-11-19 11:48:11.658696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85227 ] 00:16:58.493 [2024-11-19 11:48:11.794166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.493 [2024-11-19 11:48:11.834662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.751 [2024-11-19 11:48:11.933588] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.751 [2024-11-19 11:48:11.933643] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.751 [2024-11-19 11:48:12.088146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.751 [2024-11-19 11:48:12.088185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:58.751 [2024-11-19 11:48:12.088196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.751 [2024-11-19 11:48:12.088203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.751 [2024-11-19 11:48:12.090039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.751 [2024-11-19 11:48:12.090069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.751 [2024-11-19 11:48:12.090079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.821 ms 00:16:58.751 [2024-11-19 11:48:12.090084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.751 [2024-11-19 11:48:12.090382] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:58.751 [2024-11-19 11:48:12.090988] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:58.751 [2024-11-19 11:48:12.091010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.751 [2024-11-19 11:48:12.091017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.751 [2024-11-19 11:48:12.091029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:16:58.751 [2024-11-19 11:48:12.091035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.751 [2024-11-19 11:48:12.092443] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:58.751 [2024-11-19 11:48:12.095375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.751 [2024-11-19 11:48:12.095401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:58.751 [2024-11-19 11:48:12.095426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.933 ms 00:16:58.751 [2024-11-19 11:48:12.095434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.751 [2024-11-19 11:48:12.095486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.751 [2024-11-19 11:48:12.095494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:58.751 [2024-11-19 11:48:12.095501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:58.751 [2024-11-19 11:48:12.095506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.751 [2024-11-19 11:48:12.101831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
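Note the contrast with the first bring-up: here the superblock written earlier is found, loaded, and validated, so the layout comes up in 'setup mode 0' (reusing the on-disk layout, visible just below as 'layout blob load') instead of the 'mode 1' fresh format that forced the NV cache scrub earlier in this log. A quick way to spot which path a run took, again assuming the console output is saved as ftl.log:

    grep -o 'FTL layout setup mode [01]' ftl.log    # mode 1 = fresh create, mode 0 = reload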
00:16:58.751 [2024-11-19 11:48:12.101850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.751 [2024-11-19 11:48:12.101858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.294 ms 00:16:58.751 [2024-11-19 11:48:12.101863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.751 [2024-11-19 11:48:12.101955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.751 [2024-11-19 11:48:12.101965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.751 [2024-11-19 11:48:12.101972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:16:58.751 [2024-11-19 11:48:12.101978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.751 [2024-11-19 11:48:12.101998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.751 [2024-11-19 11:48:12.102007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:58.751 [2024-11-19 11:48:12.102017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:58.751 [2024-11-19 11:48:12.102023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.751 [2024-11-19 11:48:12.102038] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:58.751 [2024-11-19 11:48:12.103594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.751 [2024-11-19 11:48:12.103612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.751 [2024-11-19 11:48:12.103620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.559 ms 00:16:58.751 [2024-11-19 11:48:12.103625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.751 [2024-11-19 11:48:12.103662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.751 [2024-11-19 11:48:12.103669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:58.751 [2024-11-19 11:48:12.103677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:58.751 [2024-11-19 11:48:12.103683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.751 [2024-11-19 11:48:12.103696] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:58.751 [2024-11-19 11:48:12.103712] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:58.751 [2024-11-19 11:48:12.103742] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:58.751 [2024-11-19 11:48:12.103757] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:58.751 [2024-11-19 11:48:12.103837] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:58.751 [2024-11-19 11:48:12.103846] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:58.751 [2024-11-19 11:48:12.103854] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:58.751 [2024-11-19 11:48:12.103862] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:58.751 [2024-11-19 11:48:12.103869] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:58.751 [2024-11-19 11:48:12.103875] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:58.751 [2024-11-19 11:48:12.103880] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:58.751 [2024-11-19 11:48:12.103886] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:58.751 [2024-11-19 11:48:12.103892] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:58.751 [2024-11-19 11:48:12.103898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.751 [2024-11-19 11:48:12.103907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:58.751 [2024-11-19 11:48:12.103913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:16:58.751 [2024-11-19 11:48:12.103919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.751 [2024-11-19 11:48:12.103985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.751 [2024-11-19 11:48:12.103991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:58.751 [2024-11-19 11:48:12.103997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:58.751 [2024-11-19 11:48:12.104004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.751 [2024-11-19 11:48:12.104081] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:58.751 [2024-11-19 11:48:12.104093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:58.751 [2024-11-19 11:48:12.104102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.751 [2024-11-19 11:48:12.104108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.751 [2024-11-19 11:48:12.104117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:58.751 [2024-11-19 11:48:12.104124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:58.751 [2024-11-19 11:48:12.104138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:58.751 [2024-11-19 11:48:12.104144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:58.751 [2024-11-19 11:48:12.104152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:58.751 [2024-11-19 11:48:12.104158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.751 [2024-11-19 11:48:12.104163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:58.751 [2024-11-19 11:48:12.104168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:58.751 [2024-11-19 11:48:12.104173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.751 [2024-11-19 11:48:12.104178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:58.751 [2024-11-19 11:48:12.104184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:58.751 [2024-11-19 11:48:12.104189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.751 [2024-11-19 11:48:12.104195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:58.751 [2024-11-19 11:48:12.104199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:58.751 [2024-11-19 11:48:12.104205] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.751 [2024-11-19 11:48:12.104210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:58.751 [2024-11-19 11:48:12.104215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:58.751 [2024-11-19 11:48:12.104220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.751 [2024-11-19 11:48:12.104226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:58.751 [2024-11-19 11:48:12.104232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:58.751 [2024-11-19 11:48:12.104241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.752 [2024-11-19 11:48:12.104247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:58.752 [2024-11-19 11:48:12.104253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:58.752 [2024-11-19 11:48:12.104259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.752 [2024-11-19 11:48:12.104265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:58.752 [2024-11-19 11:48:12.104270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:58.752 [2024-11-19 11:48:12.104276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.752 [2024-11-19 11:48:12.104282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:58.752 [2024-11-19 11:48:12.104288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:58.752 [2024-11-19 11:48:12.104294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.752 [2024-11-19 11:48:12.104300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:58.752 [2024-11-19 11:48:12.104306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:58.752 [2024-11-19 11:48:12.104312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.752 [2024-11-19 11:48:12.104319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:58.752 [2024-11-19 11:48:12.104325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:58.752 [2024-11-19 11:48:12.104331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.752 [2024-11-19 11:48:12.104340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:58.752 [2024-11-19 11:48:12.104346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:58.752 [2024-11-19 11:48:12.104352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.752 [2024-11-19 11:48:12.104358] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:58.752 [2024-11-19 11:48:12.104365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:58.752 [2024-11-19 11:48:12.104371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.752 [2024-11-19 11:48:12.104381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.752 [2024-11-19 11:48:12.104387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:58.752 [2024-11-19 11:48:12.104393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:58.752 [2024-11-19 11:48:12.104399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:58.752 
[2024-11-19 11:48:12.104415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:58.752 [2024-11-19 11:48:12.104422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:58.752 [2024-11-19 11:48:12.104428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:58.752 [2024-11-19 11:48:12.104435] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:58.752 [2024-11-19 11:48:12.104444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.752 [2024-11-19 11:48:12.104452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:58.752 [2024-11-19 11:48:12.104460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:58.752 [2024-11-19 11:48:12.104468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:58.752 [2024-11-19 11:48:12.104474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:58.752 [2024-11-19 11:48:12.104481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:58.752 [2024-11-19 11:48:12.104488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:58.752 [2024-11-19 11:48:12.104494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:58.752 [2024-11-19 11:48:12.104509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:58.752 [2024-11-19 11:48:12.104516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:58.752 [2024-11-19 11:48:12.104522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:58.752 [2024-11-19 11:48:12.104529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:58.752 [2024-11-19 11:48:12.104535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:58.752 [2024-11-19 11:48:12.104542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:58.752 [2024-11-19 11:48:12.104548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:58.752 [2024-11-19 11:48:12.104557] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:58.752 [2024-11-19 11:48:12.104565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.752 [2024-11-19 11:48:12.104572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:58.752 [2024-11-19 11:48:12.104580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:58.752 [2024-11-19 11:48:12.104588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:58.752 [2024-11-19 11:48:12.104594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:58.752 [2024-11-19 11:48:12.104601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.104613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:58.752 [2024-11-19 11:48:12.104620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:16:58.752 [2024-11-19 11:48:12.104626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.752 [2024-11-19 11:48:12.124286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.124319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.752 [2024-11-19 11:48:12.124328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.620 ms 00:16:58.752 [2024-11-19 11:48:12.124338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.752 [2024-11-19 11:48:12.124454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.124465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:58.752 [2024-11-19 11:48:12.124473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:58.752 [2024-11-19 11:48:12.124479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.752 [2024-11-19 11:48:12.136024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.136058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.752 [2024-11-19 11:48:12.136072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.527 ms 00:16:58.752 [2024-11-19 11:48:12.136083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.752 [2024-11-19 11:48:12.136191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.136209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.752 [2024-11-19 11:48:12.136228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.752 [2024-11-19 11:48:12.136239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.752 [2024-11-19 11:48:12.136704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.136724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.752 [2024-11-19 11:48:12.136732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:16:58.752 [2024-11-19 11:48:12.136738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.752 [2024-11-19 11:48:12.136855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.136866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.752 [2024-11-19 11:48:12.136876] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:16:58.752 [2024-11-19 11:48:12.136882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.752 [2024-11-19 11:48:12.142633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.142652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.752 [2024-11-19 11:48:12.142660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.730 ms 00:16:58.752 [2024-11-19 11:48:12.142670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.752 [2024-11-19 11:48:12.145298] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:58.752 [2024-11-19 11:48:12.145329] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:58.752 [2024-11-19 11:48:12.145339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.145346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:58.752 [2024-11-19 11:48:12.145353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.600 ms 00:16:58.752 [2024-11-19 11:48:12.145358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.752 [2024-11-19 11:48:12.156830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.156853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:58.752 [2024-11-19 11:48:12.156862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.435 ms 00:16:58.752 [2024-11-19 11:48:12.156869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.752 [2024-11-19 11:48:12.158442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.158463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:58.752 [2024-11-19 11:48:12.158470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.518 ms 00:16:58.752 [2024-11-19 11:48:12.158475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.752 [2024-11-19 11:48:12.159846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.159869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:58.752 [2024-11-19 11:48:12.159881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:16:58.752 [2024-11-19 11:48:12.159887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.752 [2024-11-19 11:48:12.160153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.752 [2024-11-19 11:48:12.160168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:58.752 [2024-11-19 11:48:12.160175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:16:58.752 [2024-11-19 11:48:12.160184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.009 [2024-11-19 11:48:12.177976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.009 [2024-11-19 11:48:12.178005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:59.009 [2024-11-19 11:48:12.178015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.773 ms 00:16:59.009 [2024-11-19 11:48:12.178022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.009 [2024-11-19 11:48:12.184022] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:59.009 [2024-11-19 11:48:12.199273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.009 [2024-11-19 11:48:12.199301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:59.009 [2024-11-19 11:48:12.199311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.199 ms 00:16:59.009 [2024-11-19 11:48:12.199318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.009 [2024-11-19 11:48:12.199428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.009 [2024-11-19 11:48:12.199437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:59.009 [2024-11-19 11:48:12.199445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:59.009 [2024-11-19 11:48:12.199451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.009 [2024-11-19 11:48:12.199501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.009 [2024-11-19 11:48:12.199508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:59.009 [2024-11-19 11:48:12.199515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:59.009 [2024-11-19 11:48:12.199521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.009 [2024-11-19 11:48:12.199542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.009 [2024-11-19 11:48:12.199549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:59.009 [2024-11-19 11:48:12.199555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:59.009 [2024-11-19 11:48:12.199561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.009 [2024-11-19 11:48:12.199590] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:59.009 [2024-11-19 11:48:12.199598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.009 [2024-11-19 11:48:12.199605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:59.009 [2024-11-19 11:48:12.199612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:59.009 [2024-11-19 11:48:12.199617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.010 [2024-11-19 11:48:12.203574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.010 [2024-11-19 11:48:12.203598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:59.010 [2024-11-19 11:48:12.203607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.941 ms 00:16:59.010 [2024-11-19 11:48:12.203613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.010 [2024-11-19 11:48:12.203690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:59.010 [2024-11-19 11:48:12.203698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:59.010 [2024-11-19 11:48:12.203707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:59.010 [2024-11-19 11:48:12.203714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:59.010 
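Every management step in the startup trace above emits the same Action / name / duration / status quadruple from mngt/ftl_mngt.c. To condense a captured log into a per-step timing list, a one-liner along these lines is enough (a sketch, assuming one record per line as in the raw console output; build.log is a placeholder filename):

    # Pair each FTL step name with the duration reported two records later.
    sed -n 's/.*trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] \(name\|duration\): //p' build.log |
        paste - -    # -> "Check configuration<TAB>0.004 ms", one step per line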
[2024-11-19 11:48:12.204909] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.010 [2024-11-19 11:48:12.205774] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.488 ms, result 0 00:16:59.010 [2024-11-19 11:48:12.206422] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:59.010 [2024-11-19 11:48:12.216000] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:59.944  [2024-11-19T11:48:14.289Z] Copying: 19/256 [MB] (19 MBps) [2024-11-19T11:48:15.224Z] Copying: 41/256 [MB] (22 MBps) [2024-11-19T11:48:16.598Z] Copying: 57/256 [MB] (16 MBps) [2024-11-19T11:48:17.531Z] Copying: 76/256 [MB] (18 MBps) [2024-11-19T11:48:18.514Z] Copying: 97/256 [MB] (21 MBps) [2024-11-19T11:48:19.455Z] Copying: 117/256 [MB] (19 MBps) [2024-11-19T11:48:20.396Z] Copying: 138/256 [MB] (20 MBps) [2024-11-19T11:48:21.335Z] Copying: 155/256 [MB] (17 MBps) [2024-11-19T11:48:22.273Z] Copying: 170/256 [MB] (15 MBps) [2024-11-19T11:48:23.658Z] Copying: 188/256 [MB] (17 MBps) [2024-11-19T11:48:24.228Z] Copying: 207/256 [MB] (19 MBps) [2024-11-19T11:48:25.614Z] Copying: 223/256 [MB] (15 MBps) [2024-11-19T11:48:26.557Z] Copying: 238/256 [MB] (15 MBps) [2024-11-19T11:48:27.131Z] Copying: 254488/262144 [kB] (9976 kBps) [2024-11-19T11:48:27.131Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-19 11:48:26.987690] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:13.719 [2024-11-19 11:48:26.990149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-11-19 11:48:26.990200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:13.719 [2024-11-19 11:48:26.990219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:13.719 [2024-11-19 11:48:26.990234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-11-19 11:48:26.990258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:13.719 [2024-11-19 11:48:26.991206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-11-19 11:48:26.991248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:13.719 [2024-11-19 11:48:26.991261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:17:13.719 [2024-11-19 11:48:26.991273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-11-19 11:48:26.994596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-11-19 11:48:26.994635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:13.719 [2024-11-19 11:48:26.994648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:17:13.719 [2024-11-19 11:48:26.994657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-11-19 11:48:27.002822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-11-19 11:48:27.002870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:13.719 [2024-11-19 11:48:27.002882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.145 ms 00:17:13.719 [2024-11-19 11:48:27.002890] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-11-19 11:48:27.009968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-11-19 11:48:27.010009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:13.719 [2024-11-19 11:48:27.010021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.993 ms 00:17:13.719 [2024-11-19 11:48:27.010030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-11-19 11:48:27.013189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-11-19 11:48:27.013233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:13.719 [2024-11-19 11:48:27.013243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.097 ms 00:17:13.719 [2024-11-19 11:48:27.013252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-11-19 11:48:27.019470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-11-19 11:48:27.019513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:13.719 [2024-11-19 11:48:27.019535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.157 ms 00:17:13.719 [2024-11-19 11:48:27.019545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-11-19 11:48:27.019685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-11-19 11:48:27.019698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:13.719 [2024-11-19 11:48:27.019709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:13.719 [2024-11-19 11:48:27.019719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-11-19 11:48:27.023255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-11-19 11:48:27.023295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:13.719 [2024-11-19 11:48:27.023306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.515 ms 00:17:13.719 [2024-11-19 11:48:27.023314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-11-19 11:48:27.026548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-11-19 11:48:27.026588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:13.719 [2024-11-19 11:48:27.026599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.176 ms 00:17:13.719 [2024-11-19 11:48:27.026607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-11-19 11:48:27.029133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-11-19 11:48:27.029173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:13.719 [2024-11-19 11:48:27.029185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.482 ms 00:17:13.719 [2024-11-19 11:48:27.029193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-11-19 11:48:27.031675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.719 [2024-11-19 11:48:27.031715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:13.719 [2024-11-19 11:48:27.031726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.404 ms 00:17:13.719 [2024-11-19 11:48:27.031735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.719 [2024-11-19 11:48:27.031777] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:13.719 [2024-11-19 11:48:27.031802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:13.719 [2024-11-19 11:48:27.031813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 
[2024-11-19 11:48:27.031989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.031996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:17:13.720 [2024-11-19 11:48:27.032215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:13.720 [2024-11-19 11:48:27.032555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:13.721 [2024-11-19 11:48:27.032565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:13.721 [2024-11-19 11:48:27.032573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:13.721 [2024-11-19 11:48:27.032580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:13.721 [2024-11-19 11:48:27.032588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:13.721 [2024-11-19 11:48:27.032595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:13.721 [2024-11-19 11:48:27.032603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:13.721 [2024-11-19 11:48:27.032613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:13.721 [2024-11-19 11:48:27.032621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:13.721 [2024-11-19 11:48:27.032629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:13.721 [2024-11-19 11:48:27.032637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:13.721 [2024-11-19 11:48:27.032644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:13.721 [2024-11-19 11:48:27.032660] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:13.721 [2024-11-19 11:48:27.032670] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c 00:17:13.721 [2024-11-19 11:48:27.032679] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:13.721 [2024-11-19 11:48:27.032687] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:13.721 [2024-11-19 11:48:27.032694] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:13.721 [2024-11-19 11:48:27.032703] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:13.721 [2024-11-19 11:48:27.032710] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:13.721 [2024-11-19 11:48:27.032720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:13.721 [2024-11-19 11:48:27.032728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:13.721 [2024-11-19 11:48:27.032734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:13.721 [2024-11-19 11:48:27.032740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:13.721 [2024-11-19 11:48:27.032749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.721 [2024-11-19 11:48:27.032757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:13.721 [2024-11-19 11:48:27.032778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:17:13.721 [2024-11-19 11:48:27.032786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.035845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.721 [2024-11-19 11:48:27.035869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:13.721 [2024-11-19 11:48:27.035880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.039 ms 00:17:13.721 [2024-11-19 11:48:27.035888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.036052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:13.721 [2024-11-19 11:48:27.036069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:13.721 [2024-11-19 11:48:27.036078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:17:13.721 [2024-11-19 11:48:27.036086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.045699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.721 [2024-11-19 11:48:27.045743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:13.721 [2024-11-19 11:48:27.045754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.721 [2024-11-19 11:48:27.045762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.045833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.721 [2024-11-19 11:48:27.045846] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:13.721 [2024-11-19 11:48:27.045855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.721 [2024-11-19 11:48:27.045863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.045915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.721 [2024-11-19 11:48:27.045926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:13.721 [2024-11-19 11:48:27.045935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.721 [2024-11-19 11:48:27.045944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.045976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.721 [2024-11-19 11:48:27.045985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:13.721 [2024-11-19 11:48:27.045997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.721 [2024-11-19 11:48:27.046006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.065376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.721 [2024-11-19 11:48:27.065436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:13.721 [2024-11-19 11:48:27.065449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.721 [2024-11-19 11:48:27.065457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.080908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.721 [2024-11-19 11:48:27.080960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:13.721 [2024-11-19 11:48:27.080982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.721 [2024-11-19 11:48:27.080991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.081050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.721 [2024-11-19 11:48:27.081061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:13.721 [2024-11-19 11:48:27.081071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.721 [2024-11-19 11:48:27.081080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.081116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.721 [2024-11-19 11:48:27.081127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:13.721 [2024-11-19 11:48:27.081138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.721 [2024-11-19 11:48:27.081151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.081246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.721 [2024-11-19 11:48:27.081259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:13.721 [2024-11-19 11:48:27.081268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.721 [2024-11-19 11:48:27.081277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.081324] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.721 [2024-11-19 11:48:27.081336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:13.721 [2024-11-19 11:48:27.081345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.721 [2024-11-19 11:48:27.081361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.081440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.721 [2024-11-19 11:48:27.081461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:13.721 [2024-11-19 11:48:27.081470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.721 [2024-11-19 11:48:27.081479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.081539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:13.721 [2024-11-19 11:48:27.081553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:13.721 [2024-11-19 11:48:27.081563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:13.721 [2024-11-19 11:48:27.081577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:13.721 [2024-11-19 11:48:27.081770] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 91.584 ms, result 0 00:17:13.982 00:17:13.982 00:17:14.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.243 11:48:27 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85390 00:17:14.243 11:48:27 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85390 00:17:14.243 11:48:27 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85390 ']' 00:17:14.243 11:48:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.243 11:48:27 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.243 11:48:27 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.243 11:48:27 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:14.243 11:48:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.243 11:48:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:14.243 [2024-11-19 11:48:27.478644] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
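The target launch above follows the usual SPDK test pattern: start spdk_tgt in the background (ftl/trim.sh@71-73), then block until its RPC socket at /var/tmp/spdk.sock answers, which is what waitforlisten from autotest_common.sh waits for. The loop below is an approximation of that pattern, not a copy of the real helper; rpc_get_methods is a standard SPDK RPC used here only as a liveness probe:

    #!/usr/bin/env bash
    # Approximate start-and-wait pattern; the real waitforlisten lives in
    # test/common/autotest_common.sh and is more thorough.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!
    for _ in $(seq 1 100); do
        if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break    # target is up and listening on the socket
        fi
        sleep 0.1
    done

Once the socket answers, trim.sh loads the bdev and FTL configuration over that same socket with rpc.py load_config (trim.sh@75), which is what produces the second FTL startup trace below.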
00:17:14.243 [2024-11-19 11:48:27.479056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85390 ] 00:17:14.243 [2024-11-19 11:48:27.616017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.505 [2024-11-19 11:48:27.687829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.077 11:48:28 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.077 11:48:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:15.077 11:48:28 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:15.339 [2024-11-19 11:48:28.538209] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:15.339 [2024-11-19 11:48:28.538305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:15.339 [2024-11-19 11:48:28.717829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.339 [2024-11-19 11:48:28.717891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:15.339 [2024-11-19 11:48:28.717908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:15.339 [2024-11-19 11:48:28.717919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.339 [2024-11-19 11:48:28.720673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.339 [2024-11-19 11:48:28.720727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:15.339 [2024-11-19 11:48:28.720740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.728 ms 00:17:15.339 [2024-11-19 11:48:28.720750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.339 [2024-11-19 11:48:28.720854] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:15.339 [2024-11-19 11:48:28.721180] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:15.339 [2024-11-19 11:48:28.721208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.339 [2024-11-19 11:48:28.721219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:15.339 [2024-11-19 11:48:28.721238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:17:15.339 [2024-11-19 11:48:28.721253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.340 [2024-11-19 11:48:28.723566] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:15.340 [2024-11-19 11:48:28.728387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.340 [2024-11-19 11:48:28.728503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:15.340 [2024-11-19 11:48:28.728519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.817 ms 00:17:15.340 [2024-11-19 11:48:28.728528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.340 [2024-11-19 11:48:28.728617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.340 [2024-11-19 11:48:28.728629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:15.340 [2024-11-19 11:48:28.728644] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:15.340 [2024-11-19 11:48:28.728652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.340 [2024-11-19 11:48:28.740022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.340 [2024-11-19 11:48:28.740067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:15.340 [2024-11-19 11:48:28.740082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.307 ms 00:17:15.340 [2024-11-19 11:48:28.740094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.340 [2024-11-19 11:48:28.740258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.340 [2024-11-19 11:48:28.740272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:15.340 [2024-11-19 11:48:28.740292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:15.340 [2024-11-19 11:48:28.740300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.340 [2024-11-19 11:48:28.740331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.340 [2024-11-19 11:48:28.740340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:15.340 [2024-11-19 11:48:28.740351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:15.340 [2024-11-19 11:48:28.740363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.340 [2024-11-19 11:48:28.740393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:15.340 [2024-11-19 11:48:28.743073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.340 [2024-11-19 11:48:28.743120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:15.340 [2024-11-19 11:48:28.743132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.688 ms 00:17:15.340 [2024-11-19 11:48:28.743149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.340 [2024-11-19 11:48:28.743197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.340 [2024-11-19 11:48:28.743211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:15.340 [2024-11-19 11:48:28.743221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:15.340 [2024-11-19 11:48:28.743232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.340 [2024-11-19 11:48:28.743256] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:15.340 [2024-11-19 11:48:28.743285] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:15.340 [2024-11-19 11:48:28.743328] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:15.340 [2024-11-19 11:48:28.743354] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:15.340 [2024-11-19 11:48:28.743483] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:15.340 [2024-11-19 11:48:28.743502] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:15.340 [2024-11-19 11:48:28.743518] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:15.340 [2024-11-19 11:48:28.743537] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:15.340 [2024-11-19 11:48:28.743547] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:15.340 [2024-11-19 11:48:28.743564] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:15.340 [2024-11-19 11:48:28.743572] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:15.340 [2024-11-19 11:48:28.743582] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:15.340 [2024-11-19 11:48:28.743594] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:15.340 [2024-11-19 11:48:28.743605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.340 [2024-11-19 11:48:28.743618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:15.340 [2024-11-19 11:48:28.743629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:17:15.340 [2024-11-19 11:48:28.743637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.340 [2024-11-19 11:48:28.743728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.340 [2024-11-19 11:48:28.743740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:15.340 [2024-11-19 11:48:28.743752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:15.340 [2024-11-19 11:48:28.743760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.340 [2024-11-19 11:48:28.743865] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:15.340 [2024-11-19 11:48:28.743877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:15.340 [2024-11-19 11:48:28.743890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:15.340 [2024-11-19 11:48:28.743901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.340 [2024-11-19 11:48:28.743914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:15.340 [2024-11-19 11:48:28.743921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:15.340 [2024-11-19 11:48:28.743932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:15.340 [2024-11-19 11:48:28.743940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:15.340 [2024-11-19 11:48:28.743950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:15.340 [2024-11-19 11:48:28.743957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:15.340 [2024-11-19 11:48:28.743967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:15.340 [2024-11-19 11:48:28.743976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:15.340 [2024-11-19 11:48:28.743989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:15.340 [2024-11-19 11:48:28.743998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:15.340 [2024-11-19 11:48:28.744007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:15.340 [2024-11-19 11:48:28.744014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.340 
[2024-11-19 11:48:28.744024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:15.340 [2024-11-19 11:48:28.744032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:15.340 [2024-11-19 11:48:28.744042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.340 [2024-11-19 11:48:28.744049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:15.340 [2024-11-19 11:48:28.744061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:15.340 [2024-11-19 11:48:28.744067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.340 [2024-11-19 11:48:28.744076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:15.340 [2024-11-19 11:48:28.744082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:15.340 [2024-11-19 11:48:28.744091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.340 [2024-11-19 11:48:28.744099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:15.340 [2024-11-19 11:48:28.744108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:15.340 [2024-11-19 11:48:28.744115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.340 [2024-11-19 11:48:28.744125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:15.340 [2024-11-19 11:48:28.744133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:15.340 [2024-11-19 11:48:28.744142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.340 [2024-11-19 11:48:28.744148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:15.340 [2024-11-19 11:48:28.744158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:15.340 [2024-11-19 11:48:28.744165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:15.340 [2024-11-19 11:48:28.744193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:15.340 [2024-11-19 11:48:28.744201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:15.340 [2024-11-19 11:48:28.744212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:15.340 [2024-11-19 11:48:28.744220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:15.340 [2024-11-19 11:48:28.744231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:15.340 [2024-11-19 11:48:28.744239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.340 [2024-11-19 11:48:28.744248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:15.340 [2024-11-19 11:48:28.744256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:15.340 [2024-11-19 11:48:28.744265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.340 [2024-11-19 11:48:28.744272] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:15.340 [2024-11-19 11:48:28.744287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:15.340 [2024-11-19 11:48:28.744297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:15.341 [2024-11-19 11:48:28.744312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.341 [2024-11-19 11:48:28.744321] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:15.341 [2024-11-19 11:48:28.744331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:15.341 [2024-11-19 11:48:28.744338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:15.341 [2024-11-19 11:48:28.744348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:15.341 [2024-11-19 11:48:28.744356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:15.341 [2024-11-19 11:48:28.744368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:15.341 [2024-11-19 11:48:28.744377] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:15.341 [2024-11-19 11:48:28.744389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:15.341 [2024-11-19 11:48:28.744399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:15.341 [2024-11-19 11:48:28.744429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:15.341 [2024-11-19 11:48:28.744439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:15.341 [2024-11-19 11:48:28.744452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:15.341 [2024-11-19 11:48:28.744461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:15.341 [2024-11-19 11:48:28.744472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:15.341 [2024-11-19 11:48:28.744482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:15.341 [2024-11-19 11:48:28.744492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:15.341 [2024-11-19 11:48:28.744500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:15.341 [2024-11-19 11:48:28.744512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:15.341 [2024-11-19 11:48:28.744520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:15.341 [2024-11-19 11:48:28.744532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:15.341 [2024-11-19 11:48:28.744541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:15.341 [2024-11-19 11:48:28.744554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:15.341 [2024-11-19 11:48:28.744569] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:15.341 [2024-11-19 
11:48:28.744582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:15.341 [2024-11-19 11:48:28.744590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:15.341 [2024-11-19 11:48:28.744600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:15.341 [2024-11-19 11:48:28.744608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:15.341 [2024-11-19 11:48:28.744619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:15.341 [2024-11-19 11:48:28.744627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.341 [2024-11-19 11:48:28.744642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:15.341 [2024-11-19 11:48:28.744654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:17:15.341 [2024-11-19 11:48:28.744666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.764797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.764848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:15.604 [2024-11-19 11:48:28.764860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.048 ms 00:17:15.604 [2024-11-19 11:48:28.764872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.765007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.765027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:15.604 [2024-11-19 11:48:28.765040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:15.604 [2024-11-19 11:48:28.765051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.781523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.781579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:15.604 [2024-11-19 11:48:28.781590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.447 ms 00:17:15.604 [2024-11-19 11:48:28.781606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.781677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.781694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:15.604 [2024-11-19 11:48:28.781704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:15.604 [2024-11-19 11:48:28.781720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.782401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.782467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:15.604 [2024-11-19 11:48:28.782480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:17:15.604 [2024-11-19 11:48:28.782497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.782666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.782688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:15.604 [2024-11-19 11:48:28.782700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:17:15.604 [2024-11-19 11:48:28.782715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.807391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.807483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:15.604 [2024-11-19 11:48:28.807498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.647 ms 00:17:15.604 [2024-11-19 11:48:28.807509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.812438] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:15.604 [2024-11-19 11:48:28.812493] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:15.604 [2024-11-19 11:48:28.812506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.812518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:15.604 [2024-11-19 11:48:28.812529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.843 ms 00:17:15.604 [2024-11-19 11:48:28.812540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.829303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.829375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:15.604 [2024-11-19 11:48:28.829388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.677 ms 00:17:15.604 [2024-11-19 11:48:28.829403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.832903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.832956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:15.604 [2024-11-19 11:48:28.832968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.391 ms 00:17:15.604 [2024-11-19 11:48:28.832979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.835754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.835806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:15.604 [2024-11-19 11:48:28.835817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.722 ms 00:17:15.604 [2024-11-19 11:48:28.835827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.836222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.836254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:15.604 [2024-11-19 11:48:28.836264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:17:15.604 [2024-11-19 11:48:28.836277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 
11:48:28.867696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.867756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:15.604 [2024-11-19 11:48:28.867770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.394 ms 00:17:15.604 [2024-11-19 11:48:28.867786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.876849] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:15.604 [2024-11-19 11:48:28.901335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.901385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:15.604 [2024-11-19 11:48:28.901401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.445 ms 00:17:15.604 [2024-11-19 11:48:28.901425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.901522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.901544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:15.604 [2024-11-19 11:48:28.901558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:15.604 [2024-11-19 11:48:28.901576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.901642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.901653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:15.604 [2024-11-19 11:48:28.901669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:15.604 [2024-11-19 11:48:28.901682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.901723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.901733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:15.604 [2024-11-19 11:48:28.901747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:15.604 [2024-11-19 11:48:28.901755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.901804] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:15.604 [2024-11-19 11:48:28.901817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.604 [2024-11-19 11:48:28.901832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:15.604 [2024-11-19 11:48:28.901843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:15.604 [2024-11-19 11:48:28.901854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.604 [2024-11-19 11:48:28.908833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.605 [2024-11-19 11:48:28.908889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:15.605 [2024-11-19 11:48:28.908901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.954 ms 00:17:15.605 [2024-11-19 11:48:28.908912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.605 [2024-11-19 11:48:28.909021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.605 [2024-11-19 11:48:28.909039] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:15.605 [2024-11-19 11:48:28.909050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:15.605 [2024-11-19 11:48:28.909062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.605 [2024-11-19 11:48:28.910340] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:15.605 [2024-11-19 11:48:28.911772] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 192.138 ms, result 0 00:17:15.605 [2024-11-19 11:48:28.913770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:15.605 Some configs were skipped because the RPC state that can call them passed over. 00:17:15.605 11:48:28 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:15.866 [2024-11-19 11:48:29.147477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.866 [2024-11-19 11:48:29.147536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:15.866 [2024-11-19 11:48:29.147551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.037 ms 00:17:15.866 [2024-11-19 11:48:29.147560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.866 [2024-11-19 11:48:29.147599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.183 ms, result 0 00:17:15.866 true 00:17:15.866 11:48:29 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:16.126 [2024-11-19 11:48:29.363373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.126 [2024-11-19 11:48:29.363444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:16.126 [2024-11-19 11:48:29.363456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.684 ms 00:17:16.126 [2024-11-19 11:48:29.363467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.126 [2024-11-19 11:48:29.363505] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.815 ms, result 0 00:17:16.126 true 00:17:16.126 11:48:29 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85390 00:17:16.126 11:48:29 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85390 ']' 00:17:16.126 11:48:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85390 00:17:16.126 11:48:29 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:16.126 11:48:29 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.126 11:48:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85390 00:17:16.126 killing process with pid 85390 00:17:16.126 11:48:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:16.126 11:48:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:16.126 11:48:29 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85390' 00:17:16.126 11:48:29 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85390 00:17:16.126 11:48:29 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85390 00:17:16.389 [2024-11-19 11:48:29.605953] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.389 [2024-11-19 11:48:29.606031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:16.389 [2024-11-19 11:48:29.606049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:16.389 [2024-11-19 11:48:29.606059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.389 [2024-11-19 11:48:29.606091] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:16.389 [2024-11-19 11:48:29.607046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.389 [2024-11-19 11:48:29.607091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:16.389 [2024-11-19 11:48:29.607106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:17:16.389 [2024-11-19 11:48:29.607119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.389 [2024-11-19 11:48:29.607464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.389 [2024-11-19 11:48:29.607489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:16.389 [2024-11-19 11:48:29.607499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:17:16.389 [2024-11-19 11:48:29.607510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.389 [2024-11-19 11:48:29.612233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.389 [2024-11-19 11:48:29.612287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:16.389 [2024-11-19 11:48:29.612300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.701 ms 00:17:16.389 [2024-11-19 11:48:29.612311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.389 [2024-11-19 11:48:29.619296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.389 [2024-11-19 11:48:29.619355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:16.389 [2024-11-19 11:48:29.619372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.936 ms 00:17:16.389 [2024-11-19 11:48:29.619387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.389 [2024-11-19 11:48:29.622619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.390 [2024-11-19 11:48:29.622679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:16.390 [2024-11-19 11:48:29.622690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.141 ms 00:17:16.390 [2024-11-19 11:48:29.622700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.390 [2024-11-19 11:48:29.628861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.390 [2024-11-19 11:48:29.628917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:16.390 [2024-11-19 11:48:29.628929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.110 ms 00:17:16.390 [2024-11-19 11:48:29.628940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.390 [2024-11-19 11:48:29.629096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.390 [2024-11-19 11:48:29.629113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:16.390 [2024-11-19 11:48:29.629123] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:16.390 [2024-11-19 11:48:29.629135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.390 [2024-11-19 11:48:29.632595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.390 [2024-11-19 11:48:29.632647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:16.390 [2024-11-19 11:48:29.632657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.439 ms 00:17:16.390 [2024-11-19 11:48:29.632676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.390 [2024-11-19 11:48:29.635768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.390 [2024-11-19 11:48:29.635819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:16.390 [2024-11-19 11:48:29.635829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.042 ms 00:17:16.390 [2024-11-19 11:48:29.635840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.390 [2024-11-19 11:48:29.638280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.390 [2024-11-19 11:48:29.638333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:16.390 [2024-11-19 11:48:29.638343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.378 ms 00:17:16.390 [2024-11-19 11:48:29.638353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.390 [2024-11-19 11:48:29.640776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.390 [2024-11-19 11:48:29.640830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:16.390 [2024-11-19 11:48:29.640840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:17:16.390 [2024-11-19 11:48:29.640850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.390 [2024-11-19 11:48:29.640898] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:16.390 [2024-11-19 11:48:29.640918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.640937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.640951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.640960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.640971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.640980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.640990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641034] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 
[2024-11-19 11:48:29.641281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:16.390 [2024-11-19 11:48:29.641547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:16.390 [2024-11-19 11:48:29.641594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:16.391 [2024-11-19 11:48:29.641931] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:16.391 [2024-11-19 11:48:29.641941] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c 00:17:16.391 [2024-11-19 11:48:29.641952] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:16.391 [2024-11-19 11:48:29.641960] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:16.391 [2024-11-19 11:48:29.641971] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:16.391 [2024-11-19 11:48:29.641982] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:16.391 [2024-11-19 11:48:29.641992] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:16.391 [2024-11-19 11:48:29.642000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:16.391 [2024-11-19 11:48:29.642010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:16.391 [2024-11-19 11:48:29.642018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:16.391 [2024-11-19 11:48:29.642027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:16.391 [2024-11-19 11:48:29.642034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
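A note on the statistics dumped just above: WAF is presumably the write amplification factor, total media writes divided by user writes. Since this run issued only trims, user writes are zero and the ratio is undefined, which ftl_debug.c prints as "inf":

  WAF = total writes / user writes = 960 / 0 -> inf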
00:17:16.391 [2024-11-19 11:48:29.642050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:16.391 [2024-11-19 11:48:29.642061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:17:16.391 [2024-11-19 11:48:29.642074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.391 [2024-11-19 11:48:29.645004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.391 [2024-11-19 11:48:29.645054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:16.391 [2024-11-19 11:48:29.645066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.892 ms 00:17:16.391 [2024-11-19 11:48:29.645077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.391 [2024-11-19 11:48:29.645246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.391 [2024-11-19 11:48:29.645268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:16.391 [2024-11-19 11:48:29.645279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:17:16.391 [2024-11-19 11:48:29.645289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.391 [2024-11-19 11:48:29.656162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.391 [2024-11-19 11:48:29.656246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:16.391 [2024-11-19 11:48:29.656258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.391 [2024-11-19 11:48:29.656269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.391 [2024-11-19 11:48:29.656367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.391 [2024-11-19 11:48:29.656382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:16.391 [2024-11-19 11:48:29.656392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.391 [2024-11-19 11:48:29.656430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.391 [2024-11-19 11:48:29.656489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.391 [2024-11-19 11:48:29.656508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:16.391 [2024-11-19 11:48:29.656520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.391 [2024-11-19 11:48:29.656531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.391 [2024-11-19 11:48:29.656553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.391 [2024-11-19 11:48:29.656566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:16.391 [2024-11-19 11:48:29.656573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.391 [2024-11-19 11:48:29.656585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.391 [2024-11-19 11:48:29.676641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.391 [2024-11-19 11:48:29.676725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:16.391 [2024-11-19 11:48:29.676737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.391 [2024-11-19 11:48:29.676747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.391 [2024-11-19 
11:48:29.692107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.391 [2024-11-19 11:48:29.692202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:16.391 [2024-11-19 11:48:29.692215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.391 [2024-11-19 11:48:29.692230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.391 [2024-11-19 11:48:29.692308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.391 [2024-11-19 11:48:29.692334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:16.391 [2024-11-19 11:48:29.692346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.391 [2024-11-19 11:48:29.692363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.391 [2024-11-19 11:48:29.692448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.391 [2024-11-19 11:48:29.692463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:16.391 [2024-11-19 11:48:29.692472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.391 [2024-11-19 11:48:29.692484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.391 [2024-11-19 11:48:29.692573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.391 [2024-11-19 11:48:29.692587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:16.391 [2024-11-19 11:48:29.692601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.391 [2024-11-19 11:48:29.692612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.391 [2024-11-19 11:48:29.692654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.391 [2024-11-19 11:48:29.692669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:16.391 [2024-11-19 11:48:29.692680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.391 [2024-11-19 11:48:29.692694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.392 [2024-11-19 11:48:29.692750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.392 [2024-11-19 11:48:29.692764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:16.392 [2024-11-19 11:48:29.692775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.392 [2024-11-19 11:48:29.692787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.392 [2024-11-19 11:48:29.692854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:16.392 [2024-11-19 11:48:29.692870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:16.392 [2024-11-19 11:48:29.692881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:16.392 [2024-11-19 11:48:29.692894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.392 [2024-11-19 11:48:29.693093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 87.096 ms, result 0 00:17:16.653 11:48:30 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:16.653 11:48:30 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:16.911 [2024-11-19 11:48:30.068922] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:16.911 [2024-11-19 11:48:30.069029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85432 ] 00:17:16.911 [2024-11-19 11:48:30.202859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.911 [2024-11-19 11:48:30.247493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.171 [2024-11-19 11:48:30.346142] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:17.171 [2024-11-19 11:48:30.346198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:17.171 [2024-11-19 11:48:30.500448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-11-19 11:48:30.500488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:17.171 [2024-11-19 11:48:30.500499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:17.171 [2024-11-19 11:48:30.500509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-11-19 11:48:30.502342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-11-19 11:48:30.502376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:17.171 [2024-11-19 11:48:30.502386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.820 ms 00:17:17.171 [2024-11-19 11:48:30.502394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-11-19 11:48:30.502463] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:17.171 [2024-11-19 11:48:30.502695] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:17.171 [2024-11-19 11:48:30.502708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-11-19 11:48:30.502714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:17.171 [2024-11-19 11:48:30.502722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:17:17.171 [2024-11-19 11:48:30.502728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-11-19 11:48:30.504343] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:17.171 [2024-11-19 11:48:30.507049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-11-19 11:48:30.507076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:17.171 [2024-11-19 11:48:30.507088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.708 ms 00:17:17.171 [2024-11-19 11:48:30.507097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-11-19 11:48:30.507145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-11-19 11:48:30.507153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:17.171 [2024-11-19 11:48:30.507160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.017 ms 00:17:17.171 [2024-11-19 11:48:30.507165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-11-19 11:48:30.513256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-11-19 11:48:30.513279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:17.171 [2024-11-19 11:48:30.513286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.060 ms 00:17:17.171 [2024-11-19 11:48:30.513295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-11-19 11:48:30.513390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-11-19 11:48:30.513400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:17.171 [2024-11-19 11:48:30.513418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:17.171 [2024-11-19 11:48:30.513427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-11-19 11:48:30.513449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-11-19 11:48:30.513457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:17.171 [2024-11-19 11:48:30.513467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:17.171 [2024-11-19 11:48:30.513475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-11-19 11:48:30.513495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:17.171 [2024-11-19 11:48:30.515035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-11-19 11:48:30.515056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:17.171 [2024-11-19 11:48:30.515064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:17:17.171 [2024-11-19 11:48:30.515070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-11-19 11:48:30.515100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-11-19 11:48:30.515111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:17.171 [2024-11-19 11:48:30.515119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:17.171 [2024-11-19 11:48:30.515127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-11-19 11:48:30.515142] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:17.171 [2024-11-19 11:48:30.515156] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:17.171 [2024-11-19 11:48:30.515188] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:17.171 [2024-11-19 11:48:30.515201] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:17.171 [2024-11-19 11:48:30.515284] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:17.171 [2024-11-19 11:48:30.515293] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:17.171 [2024-11-19 11:48:30.515301] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:17.171 [2024-11-19 11:48:30.515311] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:17.171 [2024-11-19 11:48:30.515319] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:17.171 [2024-11-19 11:48:30.515325] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:17.171 [2024-11-19 11:48:30.515331] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:17.171 [2024-11-19 11:48:30.515337] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:17.171 [2024-11-19 11:48:30.515343] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:17.171 [2024-11-19 11:48:30.515349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-11-19 11:48:30.515356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:17.171 [2024-11-19 11:48:30.515363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:17:17.171 [2024-11-19 11:48:30.515371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-11-19 11:48:30.515450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.171 [2024-11-19 11:48:30.515458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:17.171 [2024-11-19 11:48:30.515464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:17.171 [2024-11-19 11:48:30.515470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.171 [2024-11-19 11:48:30.515553] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:17.172 [2024-11-19 11:48:30.515566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:17.172 [2024-11-19 11:48:30.515573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.172 [2024-11-19 11:48:30.515583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:17.172 [2024-11-19 11:48:30.515595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:17.172 [2024-11-19 11:48:30.515606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:17.172 [2024-11-19 11:48:30.515611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.172 [2024-11-19 11:48:30.515624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:17.172 [2024-11-19 11:48:30.515629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:17.172 [2024-11-19 11:48:30.515635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.172 [2024-11-19 11:48:30.515641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:17.172 [2024-11-19 11:48:30.515647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:17.172 [2024-11-19 11:48:30.515652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515657] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:17.172 [2024-11-19 11:48:30.515662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:17.172 [2024-11-19 11:48:30.515667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:17.172 [2024-11-19 11:48:30.515678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.172 [2024-11-19 11:48:30.515690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:17.172 [2024-11-19 11:48:30.515697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.172 [2024-11-19 11:48:30.515712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:17.172 [2024-11-19 11:48:30.515718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.172 [2024-11-19 11:48:30.515730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:17.172 [2024-11-19 11:48:30.515736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.172 [2024-11-19 11:48:30.515747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:17.172 [2024-11-19 11:48:30.515753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.172 [2024-11-19 11:48:30.515766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:17.172 [2024-11-19 11:48:30.515772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:17.172 [2024-11-19 11:48:30.515778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.172 [2024-11-19 11:48:30.515784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:17.172 [2024-11-19 11:48:30.515789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:17.172 [2024-11-19 11:48:30.515796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:17.172 [2024-11-19 11:48:30.515809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:17.172 [2024-11-19 11:48:30.515815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515821] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:17.172 [2024-11-19 11:48:30.515829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:17.172 [2024-11-19 11:48:30.515839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.172 [2024-11-19 11:48:30.515845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.172 [2024-11-19 11:48:30.515853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:17.172 
[2024-11-19 11:48:30.515859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:17.172 [2024-11-19 11:48:30.515865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:17.172 [2024-11-19 11:48:30.515870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:17.172 [2024-11-19 11:48:30.515876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:17.172 [2024-11-19 11:48:30.515882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:17.172 [2024-11-19 11:48:30.515889] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:17.172 [2024-11-19 11:48:30.515900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.172 [2024-11-19 11:48:30.515907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:17.172 [2024-11-19 11:48:30.515913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:17.172 [2024-11-19 11:48:30.515921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:17.172 [2024-11-19 11:48:30.515927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:17.172 [2024-11-19 11:48:30.515934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:17.172 [2024-11-19 11:48:30.515940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:17.172 [2024-11-19 11:48:30.515947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:17.172 [2024-11-19 11:48:30.515956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:17.172 [2024-11-19 11:48:30.515962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:17.172 [2024-11-19 11:48:30.515968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:17.172 [2024-11-19 11:48:30.515975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:17.172 [2024-11-19 11:48:30.515981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:17.172 [2024-11-19 11:48:30.515987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:17.172 [2024-11-19 11:48:30.515994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:17.172 [2024-11-19 11:48:30.515999] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:17.172 [2024-11-19 11:48:30.516011] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.172 [2024-11-19 11:48:30.516020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:17.172 [2024-11-19 11:48:30.516026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:17.172 [2024-11-19 11:48:30.516034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:17.172 [2024-11-19 11:48:30.516040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:17.172 [2024-11-19 11:48:30.516047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.172 [2024-11-19 11:48:30.516056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:17.172 [2024-11-19 11:48:30.516064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:17:17.172 [2024-11-19 11:48:30.516069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.172 [2024-11-19 11:48:30.535829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.172 [2024-11-19 11:48:30.535878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:17.172 [2024-11-19 11:48:30.535896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.714 ms 00:17:17.172 [2024-11-19 11:48:30.535908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.172 [2024-11-19 11:48:30.536099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.172 [2024-11-19 11:48:30.536125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:17.172 [2024-11-19 11:48:30.536138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:17.172 [2024-11-19 11:48:30.536152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.172 [2024-11-19 11:48:30.546611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.172 [2024-11-19 11:48:30.546637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.172 [2024-11-19 11:48:30.546645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.419 ms 00:17:17.172 [2024-11-19 11:48:30.546651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.172 [2024-11-19 11:48:30.546699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.172 [2024-11-19 11:48:30.546707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:17.172 [2024-11-19 11:48:30.546715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:17.172 [2024-11-19 11:48:30.546721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.172 [2024-11-19 11:48:30.547099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.172 [2024-11-19 11:48:30.547122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:17.172 [2024-11-19 11:48:30.547132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:17:17.172 [2024-11-19 11:48:30.547139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.173 [2024-11-19 
11:48:30.547249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.173 [2024-11-19 11:48:30.547257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:17.173 [2024-11-19 11:48:30.547266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:17.173 [2024-11-19 11:48:30.547274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.173 [2024-11-19 11:48:30.553033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.173 [2024-11-19 11:48:30.553057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:17.173 [2024-11-19 11:48:30.553065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.740 ms 00:17:17.173 [2024-11-19 11:48:30.553071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.173 [2024-11-19 11:48:30.556040] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:17.173 [2024-11-19 11:48:30.556072] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:17.173 [2024-11-19 11:48:30.556081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.173 [2024-11-19 11:48:30.556087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:17.173 [2024-11-19 11:48:30.556094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.941 ms 00:17:17.173 [2024-11-19 11:48:30.556100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.173 [2024-11-19 11:48:30.567783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.173 [2024-11-19 11:48:30.567809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:17.173 [2024-11-19 11:48:30.567818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.646 ms 00:17:17.173 [2024-11-19 11:48:30.567825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.173 [2024-11-19 11:48:30.569688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.173 [2024-11-19 11:48:30.569713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:17.173 [2024-11-19 11:48:30.569719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.810 ms 00:17:17.173 [2024-11-19 11:48:30.569725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.173 [2024-11-19 11:48:30.571069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.173 [2024-11-19 11:48:30.571093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:17.173 [2024-11-19 11:48:30.571105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:17:17.173 [2024-11-19 11:48:30.571111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.173 [2024-11-19 11:48:30.571354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.173 [2024-11-19 11:48:30.571364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:17.173 [2024-11-19 11:48:30.571371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:17:17.173 [2024-11-19 11:48:30.571382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.432 [2024-11-19 11:48:30.589525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:17.432 [2024-11-19 11:48:30.589556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:17.432 [2024-11-19 11:48:30.589565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.128 ms 00:17:17.432 [2024-11-19 11:48:30.589572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.432 [2024-11-19 11:48:30.595642] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:17.432 [2024-11-19 11:48:30.610289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.432 [2024-11-19 11:48:30.610319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:17.432 [2024-11-19 11:48:30.610329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.669 ms 00:17:17.432 [2024-11-19 11:48:30.610336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.432 [2024-11-19 11:48:30.610433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.432 [2024-11-19 11:48:30.610442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:17.432 [2024-11-19 11:48:30.610450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:17.432 [2024-11-19 11:48:30.610464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.432 [2024-11-19 11:48:30.610515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.432 [2024-11-19 11:48:30.610523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:17.432 [2024-11-19 11:48:30.610530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:17.432 [2024-11-19 11:48:30.610536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.432 [2024-11-19 11:48:30.610554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.432 [2024-11-19 11:48:30.610564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:17.432 [2024-11-19 11:48:30.610570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:17.432 [2024-11-19 11:48:30.610576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.432 [2024-11-19 11:48:30.610606] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:17.432 [2024-11-19 11:48:30.610614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.432 [2024-11-19 11:48:30.610621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:17.432 [2024-11-19 11:48:30.610629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:17.432 [2024-11-19 11:48:30.610635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.432 [2024-11-19 11:48:30.614708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.432 [2024-11-19 11:48:30.614736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:17.432 [2024-11-19 11:48:30.614744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.058 ms 00:17:17.432 [2024-11-19 11:48:30.614750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.432 [2024-11-19 11:48:30.614822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.432 [2024-11-19 11:48:30.614833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:17.432 [2024-11-19 11:48:30.614840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:17.432 [2024-11-19 11:48:30.614846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.432 [2024-11-19 11:48:30.615676] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:17.432 [2024-11-19 11:48:30.616532] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 114.976 ms, result 0 00:17:17.432 [2024-11-19 11:48:30.617628] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:17.432 [2024-11-19 11:48:30.627093] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:18.366  [2024-11-19T11:48:32.713Z] Copying: 15/256 [MB] (15 MBps) [2024-11-19T11:48:33.657Z] Copying: 27/256 [MB] (11 MBps) [2024-11-19T11:48:35.038Z] Copying: 38/256 [MB] (11 MBps) [2024-11-19T11:48:35.975Z] Copying: 49/256 [MB] (11 MBps) [2024-11-19T11:48:36.914Z] Copying: 61/256 [MB] (12 MBps) [2024-11-19T11:48:37.850Z] Copying: 72/256 [MB] (10 MBps) [2024-11-19T11:48:38.790Z] Copying: 84/256 [MB] (12 MBps) [2024-11-19T11:48:39.729Z] Copying: 96/256 [MB] (11 MBps) [2024-11-19T11:48:40.665Z] Copying: 107/256 [MB] (11 MBps) [2024-11-19T11:48:42.040Z] Copying: 119/256 [MB] (11 MBps) [2024-11-19T11:48:42.974Z] Copying: 131/256 [MB] (12 MBps) [2024-11-19T11:48:43.908Z] Copying: 143/256 [MB] (11 MBps) [2024-11-19T11:48:44.844Z] Copying: 155/256 [MB] (11 MBps) [2024-11-19T11:48:45.779Z] Copying: 166/256 [MB] (11 MBps) [2024-11-19T11:48:46.714Z] Copying: 178/256 [MB] (11 MBps) [2024-11-19T11:48:47.718Z] Copying: 190/256 [MB] (11 MBps) [2024-11-19T11:48:48.653Z] Copying: 202/256 [MB] (11 MBps) [2024-11-19T11:48:50.028Z] Copying: 214/256 [MB] (12 MBps) [2024-11-19T11:48:50.963Z] Copying: 226/256 [MB] (12 MBps) [2024-11-19T11:48:51.898Z] Copying: 238/256 [MB] (11 MBps) [2024-11-19T11:48:52.158Z] Copying: 249/256 [MB] (11 MBps) [2024-11-19T11:48:52.158Z] Copying: 256/256 [MB] (average 11 MBps)[2024-11-19 11:48:52.123122] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:38.746 [2024-11-19 11:48:52.124477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.746 [2024-11-19 11:48:52.124499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:38.746 [2024-11-19 11:48:52.124511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:38.746 [2024-11-19 11:48:52.124528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.746 [2024-11-19 11:48:52.124544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:38.746 [2024-11-19 11:48:52.125058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.746 [2024-11-19 11:48:52.125080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:38.746 [2024-11-19 11:48:52.125093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:17:38.746 [2024-11-19 11:48:52.125099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.746 [2024-11-19 11:48:52.125299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.746 [2024-11-19 11:48:52.125307] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:38.746 [2024-11-19 11:48:52.125317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:17:38.746 [2024-11-19 11:48:52.125324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.746 [2024-11-19 11:48:52.128182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.746 [2024-11-19 11:48:52.128196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:38.746 [2024-11-19 11:48:52.128203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.845 ms 00:17:38.746 [2024-11-19 11:48:52.128210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.746 [2024-11-19 11:48:52.133339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.746 [2024-11-19 11:48:52.133363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:38.746 [2024-11-19 11:48:52.133372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.116 ms 00:17:38.746 [2024-11-19 11:48:52.133383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.746 [2024-11-19 11:48:52.135682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.746 [2024-11-19 11:48:52.135709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:38.746 [2024-11-19 11:48:52.135717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.249 ms 00:17:38.746 [2024-11-19 11:48:52.135722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.746 [2024-11-19 11:48:52.139685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.746 [2024-11-19 11:48:52.139719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:38.746 [2024-11-19 11:48:52.139729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.937 ms 00:17:38.746 [2024-11-19 11:48:52.139735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.746 [2024-11-19 11:48:52.139828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.746 [2024-11-19 11:48:52.139835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:38.746 [2024-11-19 11:48:52.139842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:38.746 [2024-11-19 11:48:52.139847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.746 [2024-11-19 11:48:52.142554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.746 [2024-11-19 11:48:52.142682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:38.746 [2024-11-19 11:48:52.142694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.694 ms 00:17:38.746 [2024-11-19 11:48:52.142700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.746 [2024-11-19 11:48:52.144758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.746 [2024-11-19 11:48:52.144784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:38.746 [2024-11-19 11:48:52.144790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:17:38.746 [2024-11-19 11:48:52.144796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.746 [2024-11-19 11:48:52.146469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.746 
[2024-11-19 11:48:52.146493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:38.746 [2024-11-19 11:48:52.146499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.647 ms 00:17:38.746 [2024-11-19 11:48:52.146504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.746 [2024-11-19 11:48:52.148359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.746 [2024-11-19 11:48:52.148384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:38.746 [2024-11-19 11:48:52.148391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.808 ms 00:17:38.746 [2024-11-19 11:48:52.148396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.746 [2024-11-19 11:48:52.148431] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:38.746 [2024-11-19 11:48:52.148450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:38.746 [2024-11-19 11:48:52.148458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:38.746 [2024-11-19 11:48:52.148464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:38.746 [2024-11-19 11:48:52.148470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:38.746 [2024-11-19 11:48:52.148476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:38.746 [2024-11-19 11:48:52.148481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:38.746 [2024-11-19 11:48:52.148487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148705] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 
11:48:52.148850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.148988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:17:38.747 [2024-11-19 11:48:52.148996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.149002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:38.747 [2024-11-19 11:48:52.149008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:38.748 [2024-11-19 11:48:52.149013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:38.748 [2024-11-19 11:48:52.149019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:38.748 [2024-11-19 11:48:52.149025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:38.748 [2024-11-19 11:48:52.149031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:38.748 [2024-11-19 11:48:52.149044] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:38.748 [2024-11-19 11:48:52.149050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c 00:17:38.748 [2024-11-19 11:48:52.149059] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:38.748 [2024-11-19 11:48:52.149066] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:38.748 [2024-11-19 11:48:52.149072] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:38.748 [2024-11-19 11:48:52.149079] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:38.748 [2024-11-19 11:48:52.149084] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:38.748 [2024-11-19 11:48:52.149090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:38.748 [2024-11-19 11:48:52.149096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:38.748 [2024-11-19 11:48:52.149100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:38.748 [2024-11-19 11:48:52.149105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:38.748 [2024-11-19 11:48:52.149111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.748 [2024-11-19 11:48:52.149119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:38.748 [2024-11-19 11:48:52.149129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:17:38.748 [2024-11-19 11:48:52.149134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.748 [2024-11-19 11:48:52.150833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.748 [2024-11-19 11:48:52.150852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:38.748 [2024-11-19 11:48:52.150860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:17:38.748 [2024-11-19 11:48:52.150866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.748 [2024-11-19 11:48:52.150958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.748 [2024-11-19 11:48:52.150967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:38.748 [2024-11-19 11:48:52.150973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:38.748 [2024-11-19 11:48:52.150979] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.156290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.007 [2024-11-19 11:48:52.156316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.007 [2024-11-19 11:48:52.156323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.007 [2024-11-19 11:48:52.156329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.156391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.007 [2024-11-19 11:48:52.156403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.007 [2024-11-19 11:48:52.156423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.007 [2024-11-19 11:48:52.156429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.156460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.007 [2024-11-19 11:48:52.156469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.007 [2024-11-19 11:48:52.156476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.007 [2024-11-19 11:48:52.156484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.156497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.007 [2024-11-19 11:48:52.156504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.007 [2024-11-19 11:48:52.156512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.007 [2024-11-19 11:48:52.156518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.167164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.007 [2024-11-19 11:48:52.167199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.007 [2024-11-19 11:48:52.167208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.007 [2024-11-19 11:48:52.167214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.175619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.007 [2024-11-19 11:48:52.175660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.007 [2024-11-19 11:48:52.175668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.007 [2024-11-19 11:48:52.175675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.175700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.007 [2024-11-19 11:48:52.175707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.007 [2024-11-19 11:48:52.175713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.007 [2024-11-19 11:48:52.175720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.175746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.007 [2024-11-19 11:48:52.175753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.007 [2024-11-19 11:48:52.175759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:17:39.007 [2024-11-19 11:48:52.175767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.175828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.007 [2024-11-19 11:48:52.175837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.007 [2024-11-19 11:48:52.175844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.007 [2024-11-19 11:48:52.175850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.175873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.007 [2024-11-19 11:48:52.175880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.007 [2024-11-19 11:48:52.175886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.007 [2024-11-19 11:48:52.175896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.175940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.007 [2024-11-19 11:48:52.175947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.007 [2024-11-19 11:48:52.175954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.007 [2024-11-19 11:48:52.175960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.176001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.007 [2024-11-19 11:48:52.176010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.007 [2024-11-19 11:48:52.176016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.007 [2024-11-19 11:48:52.176025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.007 [2024-11-19 11:48:52.176150] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.649 ms, result 0 00:17:39.007 00:17:39.007 00:17:39.007 11:48:52 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:39.007 11:48:52 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:39.580 11:48:52 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:39.842 [2024-11-19 11:48:53.005296] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:39.842 [2024-11-19 11:48:53.005912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85675 ] 00:17:39.842 [2024-11-19 11:48:53.142193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.842 [2024-11-19 11:48:53.215324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.103 [2024-11-19 11:48:53.365720] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.103 [2024-11-19 11:48:53.365820] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.367 [2024-11-19 11:48:53.531009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.367 [2024-11-19 11:48:53.531079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:40.367 [2024-11-19 11:48:53.531096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:40.367 [2024-11-19 11:48:53.531106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.367 [2024-11-19 11:48:53.533901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.367 [2024-11-19 11:48:53.534174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.367 [2024-11-19 11:48:53.534202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.773 ms 00:17:40.367 [2024-11-19 11:48:53.534212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.367 [2024-11-19 11:48:53.534542] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:40.367 [2024-11-19 11:48:53.534861] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:40.367 [2024-11-19 11:48:53.534895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.367 [2024-11-19 11:48:53.534910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.367 [2024-11-19 11:48:53.534924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:17:40.367 [2024-11-19 11:48:53.534933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.367 [2024-11-19 11:48:53.537272] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:40.367 [2024-11-19 11:48:53.542500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.367 [2024-11-19 11:48:53.542547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:40.367 [2024-11-19 11:48:53.542565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.230 ms 00:17:40.367 [2024-11-19 11:48:53.542578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.367 [2024-11-19 11:48:53.542667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.367 [2024-11-19 11:48:53.542679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:40.367 [2024-11-19 11:48:53.542695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:40.367 [2024-11-19 11:48:53.542704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.367 [2024-11-19 11:48:53.554228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:40.367 [2024-11-19 11:48:53.554273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.367 [2024-11-19 11:48:53.554285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.472 ms 00:17:40.367 [2024-11-19 11:48:53.554293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.367 [2024-11-19 11:48:53.554484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.367 [2024-11-19 11:48:53.554502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.367 [2024-11-19 11:48:53.554512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:17:40.367 [2024-11-19 11:48:53.554520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.367 [2024-11-19 11:48:53.554549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.367 [2024-11-19 11:48:53.554559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:40.367 [2024-11-19 11:48:53.554572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:40.367 [2024-11-19 11:48:53.554579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.367 [2024-11-19 11:48:53.554603] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:40.367 [2024-11-19 11:48:53.557322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.367 [2024-11-19 11:48:53.557544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.367 [2024-11-19 11:48:53.557562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:17:40.367 [2024-11-19 11:48:53.557577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.367 [2024-11-19 11:48:53.557638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.367 [2024-11-19 11:48:53.557650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:40.367 [2024-11-19 11:48:53.557663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:40.367 [2024-11-19 11:48:53.557671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.367 [2024-11-19 11:48:53.557698] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:40.367 [2024-11-19 11:48:53.557723] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:40.367 [2024-11-19 11:48:53.557777] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:40.367 [2024-11-19 11:48:53.557794] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:40.367 [2024-11-19 11:48:53.557909] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:40.367 [2024-11-19 11:48:53.557923] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:40.367 [2024-11-19 11:48:53.557934] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:40.367 [2024-11-19 11:48:53.557949] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:40.367 [2024-11-19 11:48:53.557959] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:40.367 [2024-11-19 11:48:53.557974] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:40.367 [2024-11-19 11:48:53.557985] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:40.367 [2024-11-19 11:48:53.557993] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:40.367 [2024-11-19 11:48:53.558001] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:40.367 [2024-11-19 11:48:53.558011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.367 [2024-11-19 11:48:53.558022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:40.367 [2024-11-19 11:48:53.558038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:17:40.367 [2024-11-19 11:48:53.558047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.367 [2024-11-19 11:48:53.558137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.367 [2024-11-19 11:48:53.558148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:40.367 [2024-11-19 11:48:53.558159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:40.367 [2024-11-19 11:48:53.558171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.367 [2024-11-19 11:48:53.558274] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:40.368 [2024-11-19 11:48:53.558286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:40.368 [2024-11-19 11:48:53.558297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.368 [2024-11-19 11:48:53.558311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:40.368 [2024-11-19 11:48:53.558327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:40.368 [2024-11-19 11:48:53.558344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:40.368 [2024-11-19 11:48:53.558359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.368 [2024-11-19 11:48:53.558377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:40.368 [2024-11-19 11:48:53.558385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:40.368 [2024-11-19 11:48:53.558394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.368 [2024-11-19 11:48:53.558402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:40.368 [2024-11-19 11:48:53.558429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:40.368 [2024-11-19 11:48:53.558437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:40.368 [2024-11-19 11:48:53.558455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:40.368 [2024-11-19 11:48:53.558463] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:40.368 [2024-11-19 11:48:53.558477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.368 [2024-11-19 11:48:53.558493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:40.368 [2024-11-19 11:48:53.558501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.368 [2024-11-19 11:48:53.558521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:40.368 [2024-11-19 11:48:53.558528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.368 [2024-11-19 11:48:53.558542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:40.368 [2024-11-19 11:48:53.558549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.368 [2024-11-19 11:48:53.558563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:40.368 [2024-11-19 11:48:53.558572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.368 [2024-11-19 11:48:53.558585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:40.368 [2024-11-19 11:48:53.558592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:40.368 [2024-11-19 11:48:53.558598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.368 [2024-11-19 11:48:53.558605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:40.368 [2024-11-19 11:48:53.558612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:40.368 [2024-11-19 11:48:53.558620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:40.368 [2024-11-19 11:48:53.558637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:40.368 [2024-11-19 11:48:53.558643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558651] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:40.368 [2024-11-19 11:48:53.558658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:40.368 [2024-11-19 11:48:53.558671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.368 [2024-11-19 11:48:53.558684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.368 [2024-11-19 11:48:53.558691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:40.368 [2024-11-19 11:48:53.558700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:40.368 [2024-11-19 11:48:53.558708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:40.368 
[2024-11-19 11:48:53.558715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:40.368 [2024-11-19 11:48:53.558723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:40.368 [2024-11-19 11:48:53.558730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:40.368 [2024-11-19 11:48:53.558739] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:40.368 [2024-11-19 11:48:53.558754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.368 [2024-11-19 11:48:53.558763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:40.368 [2024-11-19 11:48:53.558772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:40.368 [2024-11-19 11:48:53.558782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:40.368 [2024-11-19 11:48:53.558789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:40.368 [2024-11-19 11:48:53.558796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:40.368 [2024-11-19 11:48:53.558803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:40.368 [2024-11-19 11:48:53.558810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:40.368 [2024-11-19 11:48:53.558825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:40.368 [2024-11-19 11:48:53.558832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:40.368 [2024-11-19 11:48:53.558841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:40.368 [2024-11-19 11:48:53.558849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:40.368 [2024-11-19 11:48:53.558856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:40.368 [2024-11-19 11:48:53.558863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:40.368 [2024-11-19 11:48:53.558871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:40.368 [2024-11-19 11:48:53.558878] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:40.368 [2024-11-19 11:48:53.558892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.368 [2024-11-19 11:48:53.558903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:40.368 [2024-11-19 11:48:53.558913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:40.368 [2024-11-19 11:48:53.558920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:40.368 [2024-11-19 11:48:53.558928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:40.368 [2024-11-19 11:48:53.558935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.368 [2024-11-19 11:48:53.558943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:40.368 [2024-11-19 11:48:53.558955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:17:40.368 [2024-11-19 11:48:53.558964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.368 [2024-11-19 11:48:53.589534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.368 [2024-11-19 11:48:53.589779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.368 [2024-11-19 11:48:53.590098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.493 ms 00:17:40.368 [2024-11-19 11:48:53.590140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.368 [2024-11-19 11:48:53.590503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.368 [2024-11-19 11:48:53.590757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.368 [2024-11-19 11:48:53.590822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:17:40.368 [2024-11-19 11:48:53.590857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.368 [2024-11-19 11:48:53.607067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.368 [2024-11-19 11:48:53.607254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.368 [2024-11-19 11:48:53.607314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.150 ms 00:17:40.368 [2024-11-19 11:48:53.607338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.368 [2024-11-19 11:48:53.607463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.368 [2024-11-19 11:48:53.607494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.368 [2024-11-19 11:48:53.607521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.368 [2024-11-19 11:48:53.607540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.368 [2024-11-19 11:48:53.608280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.368 [2024-11-19 11:48:53.608605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.368 [2024-11-19 11:48:53.608686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:17:40.368 [2024-11-19 11:48:53.608722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.368 [2024-11-19 11:48:53.608932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.608958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.369 [2024-11-19 11:48:53.608980] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:17:40.369 [2024-11-19 11:48:53.609004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.619477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.619707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.369 [2024-11-19 11:48:53.619796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.436 ms 00:17:40.369 [2024-11-19 11:48:53.619821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.624775] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:40.369 [2024-11-19 11:48:53.624957] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:40.369 [2024-11-19 11:48:53.625023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.625046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:40.369 [2024-11-19 11:48:53.625066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.052 ms 00:17:40.369 [2024-11-19 11:48:53.625086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.641730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.641909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:40.369 [2024-11-19 11:48:53.641971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.556 ms 00:17:40.369 [2024-11-19 11:48:53.641994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.645651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.645811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:40.369 [2024-11-19 11:48:53.645867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.342 ms 00:17:40.369 [2024-11-19 11:48:53.645890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.648942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.649119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:40.369 [2024-11-19 11:48:53.649147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.887 ms 00:17:40.369 [2024-11-19 11:48:53.649155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.649538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.649558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:40.369 [2024-11-19 11:48:53.649574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:17:40.369 [2024-11-19 11:48:53.649585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.681456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.681507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:40.369 [2024-11-19 11:48:53.681521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.844 ms 00:17:40.369 [2024-11-19 11:48:53.681530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.689986] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:40.369 [2024-11-19 11:48:53.715170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.715231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:40.369 [2024-11-19 11:48:53.715246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.543 ms 00:17:40.369 [2024-11-19 11:48:53.715255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.715373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.715387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:40.369 [2024-11-19 11:48:53.715398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:40.369 [2024-11-19 11:48:53.715454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.715535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.715546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:40.369 [2024-11-19 11:48:53.715556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:40.369 [2024-11-19 11:48:53.715565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.715610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.715622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:40.369 [2024-11-19 11:48:53.715631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:40.369 [2024-11-19 11:48:53.715641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.715686] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:40.369 [2024-11-19 11:48:53.715698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.715710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:40.369 [2024-11-19 11:48:53.715720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:40.369 [2024-11-19 11:48:53.715728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.722740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.722980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:40.369 [2024-11-19 11:48:53.723011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.986 ms 00:17:40.369 [2024-11-19 11:48:53.723022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 [2024-11-19 11:48:53.723165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.369 [2024-11-19 11:48:53.723182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:40.369 [2024-11-19 11:48:53.723193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:40.369 [2024-11-19 11:48:53.723208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.369 
[2024-11-19 11:48:53.724537] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:40.369 [2024-11-19 11:48:53.726002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 193.106 ms, result 0 00:17:40.369 [2024-11-19 11:48:53.727574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:40.369 [2024-11-19 11:48:53.735076] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:40.944  [2024-11-19T11:48:54.356Z] Copying: 4096/4096 [kB] (average 9990 kBps)[2024-11-19 11:48:54.146552] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:40.944 [2024-11-19 11:48:54.147756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.944 [2024-11-19 11:48:54.147808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:40.944 [2024-11-19 11:48:54.147827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:40.944 [2024-11-19 11:48:54.147837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.944 [2024-11-19 11:48:54.147860] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:40.944 [2024-11-19 11:48:54.148859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.944 [2024-11-19 11:48:54.149134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:40.944 [2024-11-19 11:48:54.149166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:17:40.944 [2024-11-19 11:48:54.149175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.944 [2024-11-19 11:48:54.152275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.944 [2024-11-19 11:48:54.152473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:40.944 [2024-11-19 11:48:54.152495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.063 ms 00:17:40.944 [2024-11-19 11:48:54.152504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.944 [2024-11-19 11:48:54.157369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.944 [2024-11-19 11:48:54.157437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:40.944 [2024-11-19 11:48:54.157450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.832 ms 00:17:40.944 [2024-11-19 11:48:54.157460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.944 [2024-11-19 11:48:54.164466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.944 [2024-11-19 11:48:54.164506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:40.944 [2024-11-19 11:48:54.164518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.968 ms 00:17:40.944 [2024-11-19 11:48:54.164526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.944 [2024-11-19 11:48:54.167255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.944 [2024-11-19 11:48:54.167466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:40.944 [2024-11-19 11:48:54.167485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.659 ms 00:17:40.944 [2024-11-19 11:48:54.167493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.944 [2024-11-19 11:48:54.173654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.944 [2024-11-19 11:48:54.173817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:40.944 [2024-11-19 11:48:54.174122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.076 ms 00:17:40.944 [2024-11-19 11:48:54.174157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.944 [2024-11-19 11:48:54.174301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.944 [2024-11-19 11:48:54.174330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:40.944 [2024-11-19 11:48:54.174434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:40.944 [2024-11-19 11:48:54.174464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.944 [2024-11-19 11:48:54.178293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.944 [2024-11-19 11:48:54.178484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:40.944 [2024-11-19 11:48:54.178596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.785 ms 00:17:40.944 [2024-11-19 11:48:54.178621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.944 [2024-11-19 11:48:54.181784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.944 [2024-11-19 11:48:54.181942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:40.944 [2024-11-19 11:48:54.182007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.094 ms 00:17:40.944 [2024-11-19 11:48:54.182029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.944 [2024-11-19 11:48:54.184511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.944 [2024-11-19 11:48:54.184668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:40.944 [2024-11-19 11:48:54.184721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:17:40.944 [2024-11-19 11:48:54.184742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.944 [2024-11-19 11:48:54.187236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.944 [2024-11-19 11:48:54.187391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:40.944 [2024-11-19 11:48:54.187461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:17:40.944 [2024-11-19 11:48:54.187484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.944 [2024-11-19 11:48:54.187533] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:40.944 [2024-11-19 11:48:54.187573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 [2024-11-19 11:48:54.187605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 [2024-11-19 11:48:54.187634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 [2024-11-19 11:48:54.187667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 
[2024-11-19 11:48:54.187758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 [2024-11-19 11:48:54.187789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 [2024-11-19 11:48:54.187818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 [2024-11-19 11:48:54.187849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 [2024-11-19 11:48:54.187879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 [2024-11-19 11:48:54.187909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 [2024-11-19 11:48:54.188010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 [2024-11-19 11:48:54.188041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 [2024-11-19 11:48:54.188071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:40.944 [2024-11-19 11:48:54.188100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:17:40.945 [2024-11-19 11:48:54.188673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.188921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.189977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.190973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.191003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.191032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.191061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.191091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.191120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.191148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.191176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.191245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.191278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:40.945 [2024-11-19 11:48:54.191317] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:40.945 [2024-11-19 11:48:54.191338] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c 00:17:40.945 [2024-11-19 11:48:54.191368] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:40.946 [2024-11-19 11:48:54.191389] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:17:40.946 [2024-11-19 11:48:54.191433] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:40.946 [2024-11-19 11:48:54.191455] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:40.946 [2024-11-19 11:48:54.191465] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:40.946 [2024-11-19 11:48:54.191474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:40.946 [2024-11-19 11:48:54.191483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:40.946 [2024-11-19 11:48:54.191491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:40.946 [2024-11-19 11:48:54.191498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:40.946 [2024-11-19 11:48:54.191507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.946 [2024-11-19 11:48:54.191516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:40.946 [2024-11-19 11:48:54.191533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.976 ms 00:17:40.946 [2024-11-19 11:48:54.191541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.194538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.946 [2024-11-19 11:48:54.194581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:40.946 [2024-11-19 11:48:54.194592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.969 ms 00:17:40.946 [2024-11-19 11:48:54.194602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.194752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.946 [2024-11-19 11:48:54.194771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:40.946 [2024-11-19 11:48:54.194780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:17:40.946 [2024-11-19 11:48:54.194788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.204572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.946 [2024-11-19 11:48:54.204773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.946 [2024-11-19 11:48:54.204791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.946 [2024-11-19 11:48:54.204800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.204893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.946 [2024-11-19 11:48:54.204907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.946 [2024-11-19 11:48:54.204916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.946 [2024-11-19 11:48:54.204926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.204981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.946 [2024-11-19 11:48:54.204992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.946 [2024-11-19 11:48:54.205000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.946 [2024-11-19 11:48:54.205008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.205027] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.946 [2024-11-19 11:48:54.205041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.946 [2024-11-19 11:48:54.205053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.946 [2024-11-19 11:48:54.205060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.225257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.946 [2024-11-19 11:48:54.225323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.946 [2024-11-19 11:48:54.225338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.946 [2024-11-19 11:48:54.225347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.241084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.946 [2024-11-19 11:48:54.241156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.946 [2024-11-19 11:48:54.241168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.946 [2024-11-19 11:48:54.241177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.241237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.946 [2024-11-19 11:48:54.241248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.946 [2024-11-19 11:48:54.241280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.946 [2024-11-19 11:48:54.241289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.241327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.946 [2024-11-19 11:48:54.241338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.946 [2024-11-19 11:48:54.241348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.946 [2024-11-19 11:48:54.241359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.241485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.946 [2024-11-19 11:48:54.241497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.946 [2024-11-19 11:48:54.241506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.946 [2024-11-19 11:48:54.241521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.241559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.946 [2024-11-19 11:48:54.241569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:40.946 [2024-11-19 11:48:54.241578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.946 [2024-11-19 11:48:54.241587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.241656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.946 [2024-11-19 11:48:54.241667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.946 [2024-11-19 11:48:54.241679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.946 [2024-11-19 11:48:54.241689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:40.946 [2024-11-19 11:48:54.241750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.946 [2024-11-19 11:48:54.241763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.946 [2024-11-19 11:48:54.241779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.946 [2024-11-19 11:48:54.241794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.946 [2024-11-19 11:48:54.241981] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 94.200 ms, result 0 00:17:41.208 00:17:41.208 00:17:41.208 11:48:54 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85702 00:17:41.208 11:48:54 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85702 00:17:41.208 11:48:54 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:41.208 11:48:54 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85702 ']' 00:17:41.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.208 11:48:54 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.208 11:48:54 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.208 11:48:54 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.208 11:48:54 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.208 11:48:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:41.470 [2024-11-19 11:48:54.642479] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:41.470 [2024-11-19 11:48:54.642648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85702 ] 00:17:41.470 [2024-11-19 11:48:54.779293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.470 [2024-11-19 11:48:54.852078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.415 11:48:55 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.415 11:48:55 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:42.415 11:48:55 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:42.415 [2024-11-19 11:48:55.707306] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:42.415 [2024-11-19 11:48:55.707655] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:42.678 [2024-11-19 11:48:55.886215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.678 [2024-11-19 11:48:55.886280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:42.678 [2024-11-19 11:48:55.886299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:42.678 [2024-11-19 11:48:55.886310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.678 [2024-11-19 11:48:55.889086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.678 [2024-11-19 11:48:55.889145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.678 [2024-11-19 11:48:55.889159] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:17:42.678 [2024-11-19 11:48:55.889169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.678 [2024-11-19 11:48:55.889270] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:42.678 [2024-11-19 11:48:55.889577] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:42.678 [2024-11-19 11:48:55.889595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.678 [2024-11-19 11:48:55.889610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.678 [2024-11-19 11:48:55.889630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:17:42.678 [2024-11-19 11:48:55.889642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.678 [2024-11-19 11:48:55.892013] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:42.678 [2024-11-19 11:48:55.896666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.678 [2024-11-19 11:48:55.896722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:42.678 [2024-11-19 11:48:55.896736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.649 ms 00:17:42.678 [2024-11-19 11:48:55.896745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.678 [2024-11-19 11:48:55.896834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.678 [2024-11-19 11:48:55.896845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:42.678 [2024-11-19 11:48:55.896860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:42.678 [2024-11-19 11:48:55.896867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.678 [2024-11-19 11:48:55.908669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.678 [2024-11-19 11:48:55.908920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.678 [2024-11-19 11:48:55.908950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.737 ms 00:17:42.678 [2024-11-19 11:48:55.908961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.678 [2024-11-19 11:48:55.909111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.678 [2024-11-19 11:48:55.909129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.678 [2024-11-19 11:48:55.909143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:42.678 [2024-11-19 11:48:55.909150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.678 [2024-11-19 11:48:55.909183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.678 [2024-11-19 11:48:55.909193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:42.678 [2024-11-19 11:48:55.909208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:42.678 [2024-11-19 11:48:55.909218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.678 [2024-11-19 11:48:55.909245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:42.678 [2024-11-19 11:48:55.911991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:42.678 [2024-11-19 11:48:55.912190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.678 [2024-11-19 11:48:55.912210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.754 ms 00:17:42.678 [2024-11-19 11:48:55.912223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.678 [2024-11-19 11:48:55.912311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.678 [2024-11-19 11:48:55.912325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:42.678 [2024-11-19 11:48:55.912336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:42.678 [2024-11-19 11:48:55.912348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.678 [2024-11-19 11:48:55.912372] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:42.678 [2024-11-19 11:48:55.912402] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:42.678 [2024-11-19 11:48:55.912463] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:42.678 [2024-11-19 11:48:55.912494] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:42.678 [2024-11-19 11:48:55.912609] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:42.678 [2024-11-19 11:48:55.912625] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:42.678 [2024-11-19 11:48:55.912642] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:42.678 [2024-11-19 11:48:55.912657] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:42.678 [2024-11-19 11:48:55.912666] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:42.678 [2024-11-19 11:48:55.912684] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:42.678 [2024-11-19 11:48:55.912696] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:42.678 [2024-11-19 11:48:55.912707] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:42.678 [2024-11-19 11:48:55.912715] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:42.678 [2024-11-19 11:48:55.912727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.678 [2024-11-19 11:48:55.912737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:42.678 [2024-11-19 11:48:55.912748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:17:42.678 [2024-11-19 11:48:55.912755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.678 [2024-11-19 11:48:55.912845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.678 [2024-11-19 11:48:55.912856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:42.678 [2024-11-19 11:48:55.912866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:42.678 [2024-11-19 11:48:55.912874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.678 
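The l2p size printed in the dump that follows is fully determined by the two parameters just logged: 23592960 L2P entries at an address size of 4 bytes is exactly 90 MiB. A minimal shell check (an editor's sketch, not part of the test run):

    # l2p region = L2P entries x L2P address size, in MiB
    echo "l2p: $((23592960 * 4 / 1048576)) MiB"   # prints "l2p: 90 MiB", matching "blocks: 90.00 MiB" below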
[2024-11-19 11:48:55.912979] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:42.678 [2024-11-19 11:48:55.912990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:42.678 [2024-11-19 11:48:55.913007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.678 [2024-11-19 11:48:55.913015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.678 [2024-11-19 11:48:55.913029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:42.678 [2024-11-19 11:48:55.913037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:42.678 [2024-11-19 11:48:55.913047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:42.678 [2024-11-19 11:48:55.913054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:42.678 [2024-11-19 11:48:55.913067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:42.678 [2024-11-19 11:48:55.913076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.678 [2024-11-19 11:48:55.913085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:42.678 [2024-11-19 11:48:55.913092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:42.678 [2024-11-19 11:48:55.913102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.678 [2024-11-19 11:48:55.913109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:42.678 [2024-11-19 11:48:55.913119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:42.678 [2024-11-19 11:48:55.913127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.678 [2024-11-19 11:48:55.913138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:42.678 [2024-11-19 11:48:55.913146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:42.678 [2024-11-19 11:48:55.913159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.678 [2024-11-19 11:48:55.913167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:42.678 [2024-11-19 11:48:55.913178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:42.678 [2024-11-19 11:48:55.913187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.678 [2024-11-19 11:48:55.913196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:42.678 [2024-11-19 11:48:55.913203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:42.678 [2024-11-19 11:48:55.913212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.678 [2024-11-19 11:48:55.913219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:42.678 [2024-11-19 11:48:55.913230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:42.678 [2024-11-19 11:48:55.913237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.678 [2024-11-19 11:48:55.913246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:42.678 [2024-11-19 11:48:55.913253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:42.678 [2024-11-19 11:48:55.913261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.678 [2024-11-19 11:48:55.913270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:17:42.678 [2024-11-19 11:48:55.913279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:42.679 [2024-11-19 11:48:55.913286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.679 [2024-11-19 11:48:55.913295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:42.679 [2024-11-19 11:48:55.913303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:42.679 [2024-11-19 11:48:55.913314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.679 [2024-11-19 11:48:55.913322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:42.679 [2024-11-19 11:48:55.913330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:42.679 [2024-11-19 11:48:55.913338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.679 [2024-11-19 11:48:55.913346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:42.679 [2024-11-19 11:48:55.913355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:42.679 [2024-11-19 11:48:55.913364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.679 [2024-11-19 11:48:55.913372] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:42.679 [2024-11-19 11:48:55.913382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:42.679 [2024-11-19 11:48:55.913390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.679 [2024-11-19 11:48:55.913399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.679 [2024-11-19 11:48:55.913425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:42.679 [2024-11-19 11:48:55.913436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:42.679 [2024-11-19 11:48:55.913443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:42.679 [2024-11-19 11:48:55.913454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:42.679 [2024-11-19 11:48:55.913460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:42.679 [2024-11-19 11:48:55.913473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:42.679 [2024-11-19 11:48:55.913482] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:42.679 [2024-11-19 11:48:55.913499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.679 [2024-11-19 11:48:55.913508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:42.679 [2024-11-19 11:48:55.913518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:42.679 [2024-11-19 11:48:55.913525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:42.679 [2024-11-19 11:48:55.913535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:42.679 [2024-11-19 11:48:55.913544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:17:42.679 [2024-11-19 11:48:55.913554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:42.679 [2024-11-19 11:48:55.913561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:42.679 [2024-11-19 11:48:55.913571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:42.679 [2024-11-19 11:48:55.913579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:42.679 [2024-11-19 11:48:55.913588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:42.679 [2024-11-19 11:48:55.913596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:42.679 [2024-11-19 11:48:55.913605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:42.679 [2024-11-19 11:48:55.913615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:42.679 [2024-11-19 11:48:55.913627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:42.679 [2024-11-19 11:48:55.913644] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:42.679 [2024-11-19 11:48:55.913655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.679 [2024-11-19 11:48:55.913665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:42.679 [2024-11-19 11:48:55.913674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:42.679 [2024-11-19 11:48:55.913681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:42.679 [2024-11-19 11:48:55.913691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:42.679 [2024-11-19 11:48:55.913701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:55.913718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:42.679 [2024-11-19 11:48:55.913729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:17:42.679 [2024-11-19 11:48:55.913743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 [2024-11-19 11:48:55.934643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:55.934831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.679 [2024-11-19 11:48:55.934951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.813 ms 00:17:42.679 [2024-11-19 11:48:55.934981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 
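The hex superblock tables above encode the same layout as the human-readable dump, in units of FTL blocks; a 4 KiB block size is assumed in what follows, though every entry is consistent with it. The region with type 0x2, blk_offs 0x20, blk_sz 0x5a00 lines up with the l2p region (0x5a00 = 23040 blocks = 90 MiB, at an offset of 32 blocks = 0.12 MiB), and each of the four 0x800-block P2L regions (types 0xa through 0xd) is 2048 blocks = 8.00 MiB, one block per checkpoint page, matching "P2L checkpoint pages: 2048" earlier. A conversion sketch:

    blk_kib=4                                       # assumed FTL block size, in KiB
    echo "l2p: $((0x5a00 * blk_kib / 1024)) MiB"    # -> 90
    echo "p2l: $((0x800 * blk_kib / 1024)) MiB"     # -> 8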
[2024-11-19 11:48:55.935141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:55.935176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:42.679 [2024-11-19 11:48:55.935268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:42.679 [2024-11-19 11:48:55.935302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 [2024-11-19 11:48:55.952072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:55.952276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.679 [2024-11-19 11:48:55.952584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.727 ms 00:17:42.679 [2024-11-19 11:48:55.952742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 [2024-11-19 11:48:55.952847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:55.952935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.679 [2024-11-19 11:48:55.952964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:42.679 [2024-11-19 11:48:55.952987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 [2024-11-19 11:48:55.953738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:55.953883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.679 [2024-11-19 11:48:55.953943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:17:42.679 [2024-11-19 11:48:55.953972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 [2024-11-19 11:48:55.954190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:55.954227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.679 [2024-11-19 11:48:55.954255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:17:42.679 [2024-11-19 11:48:55.954284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 [2024-11-19 11:48:55.976442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:55.976659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.679 [2024-11-19 11:48:55.976730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.118 ms 00:17:42.679 [2024-11-19 11:48:55.976760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 [2024-11-19 11:48:55.981708] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:42.679 [2024-11-19 11:48:55.981908] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:42.679 [2024-11-19 11:48:55.981988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:55.982015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:42.679 [2024-11-19 11:48:55.982037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.050 ms 00:17:42.679 [2024-11-19 11:48:55.982060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 [2024-11-19 11:48:55.998882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:55.999076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:42.679 [2024-11-19 11:48:55.999140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.729 ms 00:17:42.679 [2024-11-19 11:48:55.999171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 [2024-11-19 11:48:56.002561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:56.002727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:42.679 [2024-11-19 11:48:56.002784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.292 ms 00:17:42.679 [2024-11-19 11:48:56.002811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 [2024-11-19 11:48:56.005665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:56.005828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:42.679 [2024-11-19 11:48:56.005886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.798 ms 00:17:42.679 [2024-11-19 11:48:56.005912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 [2024-11-19 11:48:56.006288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:56.006338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:42.679 [2024-11-19 11:48:56.006450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:17:42.679 [2024-11-19 11:48:56.006489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.679 [2024-11-19 11:48:56.038306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.679 [2024-11-19 11:48:56.038542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:42.679 [2024-11-19 11:48:56.038722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.714 ms 00:17:42.680 [2024-11-19 11:48:56.038764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.680 [2024-11-19 11:48:56.047666] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:42.680 [2024-11-19 11:48:56.073151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.680 [2024-11-19 11:48:56.073352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:42.680 [2024-11-19 11:48:56.073377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.275 ms 00:17:42.680 [2024-11-19 11:48:56.073388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.680 [2024-11-19 11:48:56.073528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.680 [2024-11-19 11:48:56.073554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:42.680 [2024-11-19 11:48:56.073566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:42.680 [2024-11-19 11:48:56.073581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.680 [2024-11-19 11:48:56.073660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.680 [2024-11-19 11:48:56.073671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:42.680 [2024-11-19 11:48:56.073688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.054 ms 00:17:42.680 [2024-11-19 11:48:56.073697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.680 [2024-11-19 11:48:56.073735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.680 [2024-11-19 11:48:56.073745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:42.680 [2024-11-19 11:48:56.073760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:42.680 [2024-11-19 11:48:56.073774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.680 [2024-11-19 11:48:56.073817] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:42.680 [2024-11-19 11:48:56.073833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.680 [2024-11-19 11:48:56.073844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:42.680 [2024-11-19 11:48:56.073853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:42.680 [2024-11-19 11:48:56.073864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.680 [2024-11-19 11:48:56.080907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.680 [2024-11-19 11:48:56.080969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:42.680 [2024-11-19 11:48:56.080981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.020 ms 00:17:42.680 [2024-11-19 11:48:56.080994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.680 [2024-11-19 11:48:56.081105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.680 [2024-11-19 11:48:56.081120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:42.680 [2024-11-19 11:48:56.081130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:42.680 [2024-11-19 11:48:56.081141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.680 [2024-11-19 11:48:56.082569] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:42.680 [2024-11-19 11:48:56.084044] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 195.958 ms, result 0 00:17:42.680 [2024-11-19 11:48:56.086395] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:42.942 Some configs were skipped because the RPC state that can call them passed over. 
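With startup finished, the two bdev_ftl_unmap RPCs that follow trim the first and the last 1024 blocks of the device (23591936 = 23592960 - 1024, so the second call covers the tail of the L2P range). Condensed from the trim.sh xtrace lines in this log (a sketch, not the verbatim script; the load_config redirection and config path are inferred, since xtrace does not print redirections and the path only appears later in the spdk_dd call):

    build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    waitforlisten "$svcpid"                                   # autotest_common.sh helper
    scripts/rpc.py load_config < test/ftl/config/ftl.json     # inferred redirection
    scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024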
00:17:42.942 11:48:56 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:42.942 [2024-11-19 11:48:56.311855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.942 [2024-11-19 11:48:56.312050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:42.942 [2024-11-19 11:48:56.312078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:17:42.942 [2024-11-19 11:48:56.312093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.942 [2024-11-19 11:48:56.312143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.392 ms, result 0 00:17:42.942 true 00:17:42.942 11:48:56 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:43.204 [2024-11-19 11:48:56.515726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.204 [2024-11-19 11:48:56.515783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:43.204 [2024-11-19 11:48:56.515795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.747 ms 00:17:43.204 [2024-11-19 11:48:56.515805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.204 [2024-11-19 11:48:56.515842] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.863 ms, result 0 00:17:43.204 true 00:17:43.204 11:48:56 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85702 00:17:43.204 11:48:56 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85702 ']' 00:17:43.204 11:48:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85702 00:17:43.204 11:48:56 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:43.204 11:48:56 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.204 11:48:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85702 00:17:43.204 11:48:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.204 11:48:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.204 11:48:56 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85702' 00:17:43.204 killing process with pid 85702 00:17:43.204 11:48:56 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85702 00:17:43.204 11:48:56 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85702 00:17:43.467 [2024-11-19 11:48:56.763944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.467 [2024-11-19 11:48:56.764276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:43.467 [2024-11-19 11:48:56.764481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:43.467 [2024-11-19 11:48:56.764514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.467 [2024-11-19 11:48:56.764587] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:43.467 [2024-11-19 11:48:56.765613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.467 [2024-11-19 11:48:56.765785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:43.467 [2024-11-19 11:48:56.765807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.975 ms 00:17:43.467 [2024-11-19 11:48:56.765827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.467 [2024-11-19 11:48:56.766153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.467 [2024-11-19 11:48:56.766170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:43.467 [2024-11-19 11:48:56.766180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:17:43.467 [2024-11-19 11:48:56.766191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.467 [2024-11-19 11:48:56.770552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.467 [2024-11-19 11:48:56.770724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:43.467 [2024-11-19 11:48:56.770744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.341 ms 00:17:43.467 [2024-11-19 11:48:56.770755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.467 [2024-11-19 11:48:56.777849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.467 [2024-11-19 11:48:56.778031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:43.467 [2024-11-19 11:48:56.778053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.987 ms 00:17:43.467 [2024-11-19 11:48:56.778067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.467 [2024-11-19 11:48:56.781053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.467 [2024-11-19 11:48:56.781105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:43.467 [2024-11-19 11:48:56.781115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.898 ms 00:17:43.467 [2024-11-19 11:48:56.781126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.467 [2024-11-19 11:48:56.787834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.467 [2024-11-19 11:48:56.788005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:43.467 [2024-11-19 11:48:56.788063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.653 ms 00:17:43.467 [2024-11-19 11:48:56.788089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.467 [2024-11-19 11:48:56.788274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.467 [2024-11-19 11:48:56.788308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:43.467 [2024-11-19 11:48:56.788331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:43.467 [2024-11-19 11:48:56.788353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.467 [2024-11-19 11:48:56.792223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.467 [2024-11-19 11:48:56.792458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:43.467 [2024-11-19 11:48:56.792528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.835 ms 00:17:43.467 [2024-11-19 11:48:56.792559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.467 [2024-11-19 11:48:56.795404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.467 [2024-11-19 11:48:56.795585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:43.467 [2024-11-19 
11:48:56.795648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.789 ms 00:17:43.467 [2024-11-19 11:48:56.795673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.467 [2024-11-19 11:48:56.797973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.467 [2024-11-19 11:48:56.798132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:43.467 [2024-11-19 11:48:56.798189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:17:43.467 [2024-11-19 11:48:56.798213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.467 [2024-11-19 11:48:56.800623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.467 [2024-11-19 11:48:56.800784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:43.467 [2024-11-19 11:48:56.800841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.283 ms 00:17:43.467 [2024-11-19 11:48:56.800866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.467 [2024-11-19 11:48:56.800934] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:43.467 [2024-11-19 11:48:56.800971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801810] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.801951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.802029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.802065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.802095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.802128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.802158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.802189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:43.467 [2024-11-19 11:48:56.802252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 
11:48:56.802880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.802945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.803931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:43.468 [2024-11-19 11:48:56.804007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:43.468 [2024-11-19 11:48:56.804670] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:43.468 [2024-11-19 11:48:56.804680] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c 00:17:43.468 [2024-11-19 11:48:56.804692] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:43.468 [2024-11-19 11:48:56.804702] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:43.468 [2024-11-19 11:48:56.804712] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:43.468 [2024-11-19 11:48:56.804726] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:43.468 [2024-11-19 11:48:56.804736] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:43.468 [2024-11-19 11:48:56.804747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:43.468 [2024-11-19 11:48:56.804758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:43.468 [2024-11-19 11:48:56.804765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:43.468 [2024-11-19 11:48:56.804775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:43.468 [2024-11-19 11:48:56.804785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.468 [2024-11-19 11:48:56.804799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:43.468 [2024-11-19 11:48:56.804810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.853 ms 00:17:43.468 [2024-11-19 11:48:56.804824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.468 [2024-11-19 11:48:56.807834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:43.468 [2024-11-19 11:48:56.807894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:43.468 [2024-11-19 11:48:56.807906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.963 ms 00:17:43.468 [2024-11-19 11:48:56.807918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.468 [2024-11-19 11:48:56.808075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:43.468 [2024-11-19 11:48:56.808089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:43.469 [2024-11-19 11:48:56.808101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:17:43.469 [2024-11-19 11:48:56.808112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.469 [2024-11-19 11:48:56.819318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.469 [2024-11-19 11:48:56.819372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:43.469 [2024-11-19 11:48:56.819384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.469 [2024-11-19 11:48:56.819395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.469 [2024-11-19 11:48:56.819515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.469 [2024-11-19 11:48:56.819531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:43.469 [2024-11-19 11:48:56.819541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.469 [2024-11-19 11:48:56.819555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.469 [2024-11-19 11:48:56.819605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.469 [2024-11-19 11:48:56.819619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:43.469 [2024-11-19 11:48:56.819632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.469 [2024-11-19 11:48:56.819642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.469 [2024-11-19 11:48:56.819663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.469 [2024-11-19 11:48:56.819675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:43.469 [2024-11-19 11:48:56.819684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.469 [2024-11-19 11:48:56.819695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.469 [2024-11-19 11:48:56.840018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.469 [2024-11-19 11:48:56.840074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:43.469 [2024-11-19 11:48:56.840086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.469 [2024-11-19 11:48:56.840098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.469 [2024-11-19 11:48:56.855247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.469 [2024-11-19 11:48:56.855302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:43.469 [2024-11-19 11:48:56.855315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.469 [2024-11-19 11:48:56.855330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.469 [2024-11-19 11:48:56.855424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.469 [2024-11-19 11:48:56.855450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:43.469 [2024-11-19 11:48:56.855461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.469 [2024-11-19 11:48:56.855478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:43.469 [2024-11-19 11:48:56.855522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.469 [2024-11-19 11:48:56.855533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:43.469 [2024-11-19 11:48:56.855542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.469 [2024-11-19 11:48:56.855553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.469 [2024-11-19 11:48:56.855651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.469 [2024-11-19 11:48:56.855666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:43.469 [2024-11-19 11:48:56.855674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.469 [2024-11-19 11:48:56.855684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.469 [2024-11-19 11:48:56.855726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.469 [2024-11-19 11:48:56.855741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:43.469 [2024-11-19 11:48:56.855750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.469 [2024-11-19 11:48:56.855763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.469 [2024-11-19 11:48:56.855819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.469 [2024-11-19 11:48:56.855833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:43.469 [2024-11-19 11:48:56.855842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.469 [2024-11-19 11:48:56.855854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.469 [2024-11-19 11:48:56.855922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:43.469 [2024-11-19 11:48:56.855937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:43.469 [2024-11-19 11:48:56.855948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:43.469 [2024-11-19 11:48:56.855963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:43.469 [2024-11-19 11:48:56.856151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 92.170 ms, result 0 00:17:44.043 11:48:57 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:44.043 [2024-11-19 11:48:57.275202] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
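The spdk_dd invocation above starts a second SPDK application (hence the fresh initialization banner here and the EAL parameter line below), reopens ftl0 from the saved JSON config, and copies 65536 blocks from the bdev into test/ftl/data. Assuming --count is in 4 KiB FTL blocks, as the layout math elsewhere in this log suggests, that is a 256 MiB read-back:

    echo "$((65536 * 4 / 1024)) MiB"    # -> 256 MiB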
00:17:44.043 [2024-11-19 11:48:57.275334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85741 ] 00:17:44.043 [2024-11-19 11:48:57.413465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.303 [2024-11-19 11:48:57.484725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.303 [2024-11-19 11:48:57.635192] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.303 [2024-11-19 11:48:57.635283] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.565 [2024-11-19 11:48:57.798958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.565 [2024-11-19 11:48:57.799019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.565 [2024-11-19 11:48:57.799037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:44.565 [2024-11-19 11:48:57.799046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.565 [2024-11-19 11:48:57.801829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.565 [2024-11-19 11:48:57.801882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.565 [2024-11-19 11:48:57.801900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:17:44.565 [2024-11-19 11:48:57.801909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.565 [2024-11-19 11:48:57.802015] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.565 [2024-11-19 11:48:57.802328] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.565 [2024-11-19 11:48:57.802360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.565 [2024-11-19 11:48:57.802369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.565 [2024-11-19 11:48:57.802384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:17:44.565 [2024-11-19 11:48:57.802392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.565 [2024-11-19 11:48:57.804782] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:44.565 [2024-11-19 11:48:57.809653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.565 [2024-11-19 11:48:57.809703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:44.565 [2024-11-19 11:48:57.809721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.874 ms 00:17:44.565 [2024-11-19 11:48:57.809733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.565 [2024-11-19 11:48:57.809823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.565 [2024-11-19 11:48:57.809835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:44.565 [2024-11-19 11:48:57.809844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:44.565 [2024-11-19 11:48:57.809854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.565 [2024-11-19 11:48:57.821448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:44.565 [2024-11-19 11:48:57.821490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.565 [2024-11-19 11:48:57.821502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.549 ms 00:17:44.565 [2024-11-19 11:48:57.821510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.565 [2024-11-19 11:48:57.821657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.565 [2024-11-19 11:48:57.821669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.565 [2024-11-19 11:48:57.821679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:44.565 [2024-11-19 11:48:57.821687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.565 [2024-11-19 11:48:57.821716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.565 [2024-11-19 11:48:57.821725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.565 [2024-11-19 11:48:57.821737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:44.565 [2024-11-19 11:48:57.821744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.565 [2024-11-19 11:48:57.821767] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:44.565 [2024-11-19 11:48:57.824521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.565 [2024-11-19 11:48:57.824559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.565 [2024-11-19 11:48:57.824570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:17:44.565 [2024-11-19 11:48:57.824579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.565 [2024-11-19 11:48:57.824630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.565 [2024-11-19 11:48:57.824645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.565 [2024-11-19 11:48:57.824665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:44.565 [2024-11-19 11:48:57.824674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.565 [2024-11-19 11:48:57.824694] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:44.565 [2024-11-19 11:48:57.824719] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:44.565 [2024-11-19 11:48:57.824765] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:44.565 [2024-11-19 11:48:57.824784] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:44.565 [2024-11-19 11:48:57.824901] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:44.565 [2024-11-19 11:48:57.824914] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.565 [2024-11-19 11:48:57.824926] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:44.565 [2024-11-19 11:48:57.824936] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.565 [2024-11-19 11:48:57.824946] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.566 [2024-11-19 11:48:57.824956] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:44.566 [2024-11-19 11:48:57.824964] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.566 [2024-11-19 11:48:57.824972] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:44.566 [2024-11-19 11:48:57.824985] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:44.566 [2024-11-19 11:48:57.824993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-19 11:48:57.825003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.566 [2024-11-19 11:48:57.825014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:17:44.566 [2024-11-19 11:48:57.825022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.566 [2024-11-19 11:48:57.825111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-19 11:48:57.825121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.566 [2024-11-19 11:48:57.825129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:44.566 [2024-11-19 11:48:57.825138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.566 [2024-11-19 11:48:57.825239] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.566 [2024-11-19 11:48:57.825253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.566 [2024-11-19 11:48:57.825263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.566 [2024-11-19 11:48:57.825276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.566 [2024-11-19 11:48:57.825297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:44.566 [2024-11-19 11:48:57.825317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.566 [2024-11-19 11:48:57.825328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.566 [2024-11-19 11:48:57.825345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.566 [2024-11-19 11:48:57.825353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:44.566 [2024-11-19 11:48:57.825363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.566 [2024-11-19 11:48:57.825371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.566 [2024-11-19 11:48:57.825379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:44.566 [2024-11-19 11:48:57.825387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.566 [2024-11-19 11:48:57.825403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:44.566 [2024-11-19 11:48:57.825433] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.566 [2024-11-19 11:48:57.825452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.566 [2024-11-19 11:48:57.825470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.566 [2024-11-19 11:48:57.825478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.566 [2024-11-19 11:48:57.825503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.566 [2024-11-19 11:48:57.825511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.566 [2024-11-19 11:48:57.825528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.566 [2024-11-19 11:48:57.825536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.566 [2024-11-19 11:48:57.825552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.566 [2024-11-19 11:48:57.825559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.566 [2024-11-19 11:48:57.825572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.566 [2024-11-19 11:48:57.825579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:44.566 [2024-11-19 11:48:57.825587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.566 [2024-11-19 11:48:57.825595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:44.566 [2024-11-19 11:48:57.825601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:44.566 [2024-11-19 11:48:57.825608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:44.566 [2024-11-19 11:48:57.825626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:44.566 [2024-11-19 11:48:57.825634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825642] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.566 [2024-11-19 11:48:57.825650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.566 [2024-11-19 11:48:57.825664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.566 [2024-11-19 11:48:57.825672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.566 [2024-11-19 11:48:57.825680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.566 [2024-11-19 11:48:57.825687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.566 [2024-11-19 11:48:57.825693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.566 
[2024-11-19 11:48:57.825700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.566 [2024-11-19 11:48:57.825711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.566 [2024-11-19 11:48:57.825719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.566 [2024-11-19 11:48:57.825729] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.566 [2024-11-19 11:48:57.825743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.566 [2024-11-19 11:48:57.825752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:44.566 [2024-11-19 11:48:57.825762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:44.566 [2024-11-19 11:48:57.825770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:44.566 [2024-11-19 11:48:57.825778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:44.566 [2024-11-19 11:48:57.825784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:44.566 [2024-11-19 11:48:57.825791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:44.566 [2024-11-19 11:48:57.825798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:44.566 [2024-11-19 11:48:57.825812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:44.566 [2024-11-19 11:48:57.825820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:44.566 [2024-11-19 11:48:57.825828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:44.566 [2024-11-19 11:48:57.825836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:44.566 [2024-11-19 11:48:57.825842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:44.566 [2024-11-19 11:48:57.825850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:44.566 [2024-11-19 11:48:57.825858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:44.566 [2024-11-19 11:48:57.825865] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.566 [2024-11-19 11:48:57.825876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.566 [2024-11-19 11:48:57.825885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.566 [2024-11-19 11:48:57.825895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.566 [2024-11-19 11:48:57.825903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.566 [2024-11-19 11:48:57.825910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.566 [2024-11-19 11:48:57.825919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-19 11:48:57.825931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.566 [2024-11-19 11:48:57.825941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:17:44.566 [2024-11-19 11:48:57.825953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.566 [2024-11-19 11:48:57.855776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-19 11:48:57.855851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.566 [2024-11-19 11:48:57.855868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.744 ms 00:17:44.566 [2024-11-19 11:48:57.855881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.566 [2024-11-19 11:48:57.856098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.566 [2024-11-19 11:48:57.856115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.566 [2024-11-19 11:48:57.856128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:44.567 [2024-11-19 11:48:57.856145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-19 11:48:57.872400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-19 11:48:57.872474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.567 [2024-11-19 11:48:57.872486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.222 ms 00:17:44.567 [2024-11-19 11:48:57.872495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-19 11:48:57.872578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-19 11:48:57.872589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.567 [2024-11-19 11:48:57.872601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.567 [2024-11-19 11:48:57.872611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-19 11:48:57.873305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-19 11:48:57.873343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.567 [2024-11-19 11:48:57.873363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:17:44.567 [2024-11-19 11:48:57.873373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-19 11:48:57.873568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-19 11:48:57.873580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.567 [2024-11-19 11:48:57.873590] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:17:44.567 [2024-11-19 11:48:57.873603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-19 11:48:57.884029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-19 11:48:57.884069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.567 [2024-11-19 11:48:57.884082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.397 ms 00:17:44.567 [2024-11-19 11:48:57.884093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-19 11:48:57.889116] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:44.567 [2024-11-19 11:48:57.889171] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:44.567 [2024-11-19 11:48:57.889185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-19 11:48:57.889195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:44.567 [2024-11-19 11:48:57.889205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.957 ms 00:17:44.567 [2024-11-19 11:48:57.889214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-19 11:48:57.905729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-19 11:48:57.905775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:44.567 [2024-11-19 11:48:57.905788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.425 ms 00:17:44.567 [2024-11-19 11:48:57.905797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-19 11:48:57.909117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-19 11:48:57.909162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:44.567 [2024-11-19 11:48:57.909173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:17:44.567 [2024-11-19 11:48:57.909183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-19 11:48:57.912030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-19 11:48:57.912070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:44.567 [2024-11-19 11:48:57.912091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.789 ms 00:17:44.567 [2024-11-19 11:48:57.912099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-19 11:48:57.912513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-19 11:48:57.912531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.567 [2024-11-19 11:48:57.912546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:17:44.567 [2024-11-19 11:48:57.912555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-19 11:48:57.945183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.567 [2024-11-19 11:48:57.945233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:44.567 [2024-11-19 11:48:57.945247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
32.599 ms 00:17:44.567 [2024-11-19 11:48:57.945256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.567 [2024-11-19 11:48:57.954418] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:44.829 [2024-11-19 11:48:57.979759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.829 [2024-11-19 11:48:57.979808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.829 [2024-11-19 11:48:57.979822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.402 ms 00:17:44.829 [2024-11-19 11:48:57.979830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.829 [2024-11-19 11:48:57.979945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.829 [2024-11-19 11:48:57.979957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:44.829 [2024-11-19 11:48:57.979968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:44.829 [2024-11-19 11:48:57.979985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.829 [2024-11-19 11:48:57.980061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.829 [2024-11-19 11:48:57.980073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.829 [2024-11-19 11:48:57.980083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:44.829 [2024-11-19 11:48:57.980092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.829 [2024-11-19 11:48:57.980122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.829 [2024-11-19 11:48:57.980132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.829 [2024-11-19 11:48:57.980141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:44.829 [2024-11-19 11:48:57.980149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.829 [2024-11-19 11:48:57.980190] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:44.829 [2024-11-19 11:48:57.980202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.829 [2024-11-19 11:48:57.980213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:44.829 [2024-11-19 11:48:57.980221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:44.829 [2024-11-19 11:48:57.980229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.829 [2024-11-19 11:48:57.987512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.829 [2024-11-19 11:48:57.987557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.829 [2024-11-19 11:48:57.987570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.228 ms 00:17:44.829 [2024-11-19 11:48:57.987579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.829 [2024-11-19 11:48:57.987691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.829 [2024-11-19 11:48:57.987706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.829 [2024-11-19 11:48:57.987718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:44.829 [2024-11-19 11:48:57.987732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.829 
[2024-11-19 11:48:57.990255] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.829 [2024-11-19 11:48:57.991828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 190.886 ms, result 0 00:17:44.829 [2024-11-19 11:48:57.993129] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.829 [2024-11-19 11:48:58.002773] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:45.773  [2024-11-19T11:49:00.128Z] Copying: 13/256 [MB] (13 MBps) [2024-11-19T11:49:01.071Z] Copying: 24/256 [MB] (10 MBps) [2024-11-19T11:49:02.459Z] Copying: 37/256 [MB] (13 MBps) [2024-11-19T11:49:03.403Z] Copying: 48/256 [MB] (10 MBps) [2024-11-19T11:49:04.346Z] Copying: 58/256 [MB] (10 MBps) [2024-11-19T11:49:05.289Z] Copying: 79/256 [MB] (21 MBps) [2024-11-19T11:49:06.233Z] Copying: 94/256 [MB] (14 MBps) [2024-11-19T11:49:07.179Z] Copying: 111/256 [MB] (16 MBps) [2024-11-19T11:49:08.126Z] Copying: 133/256 [MB] (21 MBps) [2024-11-19T11:49:09.066Z] Copying: 143/256 [MB] (10 MBps) [2024-11-19T11:49:10.455Z] Copying: 163/256 [MB] (20 MBps) [2024-11-19T11:49:11.396Z] Copying: 178/256 [MB] (14 MBps) [2024-11-19T11:49:12.338Z] Copying: 194/256 [MB] (15 MBps) [2024-11-19T11:49:13.281Z] Copying: 211/256 [MB] (16 MBps) [2024-11-19T11:49:14.223Z] Copying: 225/256 [MB] (13 MBps) [2024-11-19T11:49:15.168Z] Copying: 238/256 [MB] (13 MBps) [2024-11-19T11:49:15.168Z] Copying: 254/256 [MB] (16 MBps) [2024-11-19T11:49:15.742Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-19 11:49:15.596139] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:02.330 [2024-11-19 11:49:15.598483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.330 [2024-11-19 11:49:15.598537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:02.330 [2024-11-19 11:49:15.598553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:02.330 [2024-11-19 11:49:15.598571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.330 [2024-11-19 11:49:15.598602] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:02.330 [2024-11-19 11:49:15.599301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.330 [2024-11-19 11:49:15.599339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:02.330 [2024-11-19 11:49:15.599352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:18:02.330 [2024-11-19 11:49:15.599362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.330 [2024-11-19 11:49:15.599682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.330 [2024-11-19 11:49:15.599701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:02.330 [2024-11-19 11:49:15.599713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:18:02.330 [2024-11-19 11:49:15.599722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.330 [2024-11-19 11:49:15.604309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.330 [2024-11-19 11:49:15.604352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:18:02.330 [2024-11-19 11:49:15.604363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.567 ms 00:18:02.330 [2024-11-19 11:49:15.604371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.330 [2024-11-19 11:49:15.611378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.330 [2024-11-19 11:49:15.611912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:02.330 [2024-11-19 11:49:15.611943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.976 ms 00:18:02.330 [2024-11-19 11:49:15.611952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.330 [2024-11-19 11:49:15.615002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.330 [2024-11-19 11:49:15.615056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:02.330 [2024-11-19 11:49:15.615068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:18:02.330 [2024-11-19 11:49:15.615075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.330 [2024-11-19 11:49:15.620920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.330 [2024-11-19 11:49:15.620975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:02.330 [2024-11-19 11:49:15.620997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.793 ms 00:18:02.330 [2024-11-19 11:49:15.621006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.330 [2024-11-19 11:49:15.621152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.330 [2024-11-19 11:49:15.621164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:02.330 [2024-11-19 11:49:15.621174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:02.330 [2024-11-19 11:49:15.621183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.330 [2024-11-19 11:49:15.624702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.330 [2024-11-19 11:49:15.624754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:02.330 [2024-11-19 11:49:15.624765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.497 ms 00:18:02.330 [2024-11-19 11:49:15.624772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.330 [2024-11-19 11:49:15.627950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.330 [2024-11-19 11:49:15.627997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:02.330 [2024-11-19 11:49:15.628007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.129 ms 00:18:02.330 [2024-11-19 11:49:15.628016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.330 [2024-11-19 11:49:15.630601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.330 [2024-11-19 11:49:15.630660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:02.330 [2024-11-19 11:49:15.630671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.538 ms 00:18:02.330 [2024-11-19 11:49:15.630678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.330 [2024-11-19 11:49:15.633909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.330 [2024-11-19 11:49:15.633975] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:02.330 [2024-11-19 11:49:15.633988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.150 ms 00:18:02.330 [2024-11-19 11:49:15.633997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.330 [2024-11-19 11:49:15.634051] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:02.330 [2024-11-19 11:49:15.634079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 
[2024-11-19 11:49:15.634309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:02.331 [2024-11-19 11:49:15.634618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.634865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:02.331 [2024-11-19 11:49:15.635229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:02.332 [2024-11-19 11:49:15.635241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:02.332 [2024-11-19 11:49:15.635251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:02.332 [2024-11-19 11:49:15.635261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:02.332 [2024-11-19 11:49:15.635271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:02.332 [2024-11-19 11:49:15.635282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:02.332 [2024-11-19 11:49:15.635294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:02.332 [2024-11-19 11:49:15.635304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:02.332 [2024-11-19 11:49:15.635314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:02.332 [2024-11-19 11:49:15.635324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:02.332 [2024-11-19 11:49:15.635334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:02.332 [2024-11-19 11:49:15.635354] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:02.332 [2024-11-19 11:49:15.635365] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9459b3b9-9e0b-48e6-9c91-4595cdc8fc0c 00:18:02.332 [2024-11-19 11:49:15.635376] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:02.332 [2024-11-19 11:49:15.635386] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:02.332 [2024-11-19 11:49:15.635402] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:02.332 [2024-11-19 11:49:15.635437] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:02.332 [2024-11-19 11:49:15.635448] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:02.332 [2024-11-19 11:49:15.635458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:02.332 [2024-11-19 11:49:15.635469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:02.332 [2024-11-19 11:49:15.635477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:02.332 [2024-11-19 11:49:15.635486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:02.332 [2024-11-19 11:49:15.635496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.332 [2024-11-19 11:49:15.635507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:02.332 [2024-11-19 11:49:15.635522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:18:02.332 [2024-11-19 11:49:15.635532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.638115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.332 [2024-11-19 11:49:15.638158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:02.332 [2024-11-19 11:49:15.638174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.552 ms 00:18:02.332 [2024-11-19 11:49:15.638183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.638310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.332 [2024-11-19 11:49:15.638323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:02.332 [2024-11-19 11:49:15.638332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:02.332 [2024-11-19 11:49:15.638340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.646127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.332 [2024-11-19 11:49:15.646188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:02.332 [2024-11-19 11:49:15.646199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.332 [2024-11-19 11:49:15.646207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:02.332 [2024-11-19 11:49:15.646299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.332 [2024-11-19 11:49:15.646313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:02.332 [2024-11-19 11:49:15.646321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.332 [2024-11-19 11:49:15.646329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.646377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.332 [2024-11-19 11:49:15.646387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:02.332 [2024-11-19 11:49:15.646396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.332 [2024-11-19 11:49:15.646423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.646444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.332 [2024-11-19 11:49:15.646453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:02.332 [2024-11-19 11:49:15.646465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.332 [2024-11-19 11:49:15.646474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.660251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.332 [2024-11-19 11:49:15.660329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:02.332 [2024-11-19 11:49:15.660342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.332 [2024-11-19 11:49:15.660350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.671289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.332 [2024-11-19 11:49:15.671340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:02.332 [2024-11-19 11:49:15.671351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.332 [2024-11-19 11:49:15.671359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.671424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.332 [2024-11-19 11:49:15.671434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:02.332 [2024-11-19 11:49:15.671443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.332 [2024-11-19 11:49:15.671451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.671487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.332 [2024-11-19 11:49:15.671496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.332 [2024-11-19 11:49:15.671504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.332 [2024-11-19 11:49:15.671515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.671584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.332 [2024-11-19 11:49:15.671594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.332 [2024-11-19 11:49:15.671602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.332 [2024-11-19 
11:49:15.671610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.671644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.332 [2024-11-19 11:49:15.671653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:02.332 [2024-11-19 11:49:15.671661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.332 [2024-11-19 11:49:15.671669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.671719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.332 [2024-11-19 11:49:15.671729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.332 [2024-11-19 11:49:15.671737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.332 [2024-11-19 11:49:15.671744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.671789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.332 [2024-11-19 11:49:15.671798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.332 [2024-11-19 11:49:15.671807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.332 [2024-11-19 11:49:15.671817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.332 [2024-11-19 11:49:15.671967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.463 ms, result 0 00:18:02.592 00:18:02.592 00:18:02.592 11:49:15 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:03.183 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:03.183 11:49:16 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:03.183 11:49:16 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:03.183 11:49:16 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:03.183 11:49:16 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:03.183 11:49:16 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:03.183 11:49:16 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:03.184 Process with pid 85702 is not found 00:18:03.184 11:49:16 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85702 00:18:03.184 11:49:16 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85702 ']' 00:18:03.184 11:49:16 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85702 00:18:03.184 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85702) - No such process 00:18:03.184 11:49:16 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 85702 is not found' 00:18:03.184 00:18:03.184 real 1m18.786s 00:18:03.184 user 1m41.460s 00:18:03.184 sys 0m5.945s 00:18:03.184 11:49:16 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:03.184 ************************************ 00:18:03.184 END TEST ftl_trim 00:18:03.184 ************************************ 00:18:03.184 11:49:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:03.184 11:49:16 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:03.184 11:49:16 ftl -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:03.184 11:49:16 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.184 11:49:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:03.184 ************************************ 00:18:03.184 START TEST ftl_restore 00:18:03.184 ************************************ 00:18:03.184 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:03.515 * Looking for test storage... 00:18:03.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:03.515 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.516 11:49:16 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:03.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.516 --rc genhtml_branch_coverage=1 00:18:03.516 --rc genhtml_function_coverage=1 00:18:03.516 --rc genhtml_legend=1 00:18:03.516 --rc geninfo_all_blocks=1 00:18:03.516 --rc geninfo_unexecuted_blocks=1 00:18:03.516 00:18:03.516 ' 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:03.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.516 --rc genhtml_branch_coverage=1 00:18:03.516 --rc genhtml_function_coverage=1 00:18:03.516 --rc genhtml_legend=1 00:18:03.516 --rc geninfo_all_blocks=1 00:18:03.516 --rc geninfo_unexecuted_blocks=1 00:18:03.516 00:18:03.516 ' 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:03.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.516 --rc genhtml_branch_coverage=1 00:18:03.516 --rc genhtml_function_coverage=1 00:18:03.516 --rc genhtml_legend=1 00:18:03.516 --rc geninfo_all_blocks=1 00:18:03.516 --rc geninfo_unexecuted_blocks=1 00:18:03.516 00:18:03.516 ' 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:03.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.516 --rc genhtml_branch_coverage=1 00:18:03.516 --rc genhtml_function_coverage=1 00:18:03.516 --rc genhtml_legend=1 00:18:03.516 --rc geninfo_all_blocks=1 00:18:03.516 --rc geninfo_unexecuted_blocks=1 00:18:03.516 00:18:03.516 ' 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
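The xtrace above steps through scripts/common.sh comparing the installed lcov version against 2, field by field, to decide which --rc option spelling to use. A condensed, self-contained sketch of that compare, assuming purely numeric version fields (function names follow the trace; the bodies are abridged, not the repository's exact code):

    # Split each version on '.', '-' or ':' and compare field by field;
    # missing fields count as 0, so '1.15' vs '2' decides on 1 < 2.
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v d1 d2
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            if ((d1 < d2)); then [[ $op == '<' ]]; return; fi
            if ((d1 > d2)); then [[ $op == '>' ]]; return; fi
        done
        return 1    # versions equal: neither strict comparison holds
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo 'lcov older than 2: use the legacy --rc option names'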
00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:03.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
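The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is the harness's waitforlisten step. A minimal sketch of the same idea under a hypothetical helper name (this is not autotest_common.sh's implementation): poll the target's RPC socket until it answers or the pid disappears.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 120; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target exited before listening
            # rpc_get_methods is used only as a cheap liveness probe;
            # any no-argument RPC would serve the same purpose.
            "$rpc_py" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                      # gave up after ~60 s
    }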
00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.tSfkM59jhv 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86010 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86010 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86010 ']' 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.516 11:49:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:03.516 11:49:16 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:03.516 [2024-11-19 11:49:16.776428] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
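Above, restore.sh consumes its command line with getopts ':u:c:f': -c sets the NV-cache PCI address, and after the shift the first remaining positional argument becomes the base device. A condensed sketch of that parsing; the meanings of -u and -f are inferred here and should be treated as hypothetical:

    while getopts ':u:c:f' opt; do
        case $opt in
            u) uuid=$OPTARG ;;          # hypothetical: reopen an existing FTL by UUID
            c) nv_cache=$OPTARG ;;      # -c 0000:00:10.0 in this run
            f) fast=1 ;;                # hypothetical bare flag, takes no argument
        esac
    done
    shift $((OPTIND - 1))               # equivalent to the literal 'shift 2' traced above
    device=$1                           # first positional argument: 0000:00:11.0
    timeout=240                         # RPC timeout used for bdev_ftl_create below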
00:18:03.516 [2024-11-19 11:49:16.776546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86010 ] 00:18:03.782 [2024-11-19 11:49:16.911709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.782 [2024-11-19 11:49:16.946228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.354 11:49:17 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.354 11:49:17 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:18:04.354 11:49:17 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:04.354 11:49:17 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:04.354 11:49:17 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:04.354 11:49:17 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:04.354 11:49:17 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:04.354 11:49:17 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:04.615 11:49:17 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:04.615 11:49:17 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:04.615 11:49:17 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:04.615 11:49:17 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:04.615 11:49:17 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:04.615 11:49:17 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:04.615 11:49:17 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:04.615 11:49:17 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:04.876 11:49:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:04.876 { 00:18:04.876 "name": "nvme0n1", 00:18:04.876 "aliases": [ 00:18:04.876 "9fea9648-56d2-469b-b9cd-46bb6055d8a9" 00:18:04.876 ], 00:18:04.876 "product_name": "NVMe disk", 00:18:04.876 "block_size": 4096, 00:18:04.876 "num_blocks": 1310720, 00:18:04.876 "uuid": "9fea9648-56d2-469b-b9cd-46bb6055d8a9", 00:18:04.876 "numa_id": -1, 00:18:04.876 "assigned_rate_limits": { 00:18:04.876 "rw_ios_per_sec": 0, 00:18:04.876 "rw_mbytes_per_sec": 0, 00:18:04.876 "r_mbytes_per_sec": 0, 00:18:04.876 "w_mbytes_per_sec": 0 00:18:04.876 }, 00:18:04.876 "claimed": true, 00:18:04.876 "claim_type": "read_many_write_one", 00:18:04.876 "zoned": false, 00:18:04.876 "supported_io_types": { 00:18:04.876 "read": true, 00:18:04.876 "write": true, 00:18:04.876 "unmap": true, 00:18:04.876 "flush": true, 00:18:04.876 "reset": true, 00:18:04.876 "nvme_admin": true, 00:18:04.876 "nvme_io": true, 00:18:04.876 "nvme_io_md": false, 00:18:04.876 "write_zeroes": true, 00:18:04.876 "zcopy": false, 00:18:04.876 "get_zone_info": false, 00:18:04.876 "zone_management": false, 00:18:04.876 "zone_append": false, 00:18:04.876 "compare": true, 00:18:04.876 "compare_and_write": false, 00:18:04.876 "abort": true, 00:18:04.876 "seek_hole": false, 00:18:04.876 "seek_data": false, 00:18:04.876 "copy": true, 00:18:04.876 "nvme_iov_md": false 00:18:04.876 }, 00:18:04.876 "driver_specific": { 00:18:04.876 "nvme": [ 
00:18:04.876 { 00:18:04.876 "pci_address": "0000:00:11.0", 00:18:04.876 "trid": { 00:18:04.876 "trtype": "PCIe", 00:18:04.876 "traddr": "0000:00:11.0" 00:18:04.876 }, 00:18:04.876 "ctrlr_data": { 00:18:04.876 "cntlid": 0, 00:18:04.876 "vendor_id": "0x1b36", 00:18:04.876 "model_number": "QEMU NVMe Ctrl", 00:18:04.876 "serial_number": "12341", 00:18:04.876 "firmware_revision": "8.0.0", 00:18:04.876 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:04.876 "oacs": { 00:18:04.876 "security": 0, 00:18:04.876 "format": 1, 00:18:04.876 "firmware": 0, 00:18:04.876 "ns_manage": 1 00:18:04.876 }, 00:18:04.876 "multi_ctrlr": false, 00:18:04.876 "ana_reporting": false 00:18:04.876 }, 00:18:04.876 "vs": { 00:18:04.876 "nvme_version": "1.4" 00:18:04.876 }, 00:18:04.876 "ns_data": { 00:18:04.876 "id": 1, 00:18:04.876 "can_share": false 00:18:04.876 } 00:18:04.876 } 00:18:04.876 ], 00:18:04.876 "mp_policy": "active_passive" 00:18:04.876 } 00:18:04.876 } 00:18:04.876 ]' 00:18:04.876 11:49:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:04.876 11:49:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:04.876 11:49:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:04.876 11:49:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:04.876 11:49:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:04.876 11:49:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:04.876 11:49:18 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:04.876 11:49:18 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:04.876 11:49:18 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:04.876 11:49:18 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:04.876 11:49:18 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:05.138 11:49:18 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=e253a074-6784-4f1f-b3d1-01e1c9a016e7 00:18:05.138 11:49:18 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:05.138 11:49:18 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e253a074-6784-4f1f-b3d1-01e1c9a016e7 00:18:05.399 11:49:18 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:05.660 11:49:18 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=7c3a2b44-656c-4297-8b62-1e1b9ba9ca79 00:18:05.660 11:49:18 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7c3a2b44-656c-4297-8b62-1e1b9ba9ca79 00:18:05.932 11:49:19 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=7bd82757-3c47-420d-b444-ab19613ca2e4 00:18:05.932 11:49:19 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:05.932 11:49:19 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7bd82757-3c47-420d-b444-ab19613ca2e4 00:18:05.932 11:49:19 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:05.932 11:49:19 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:05.932 11:49:19 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=7bd82757-3c47-420d-b444-ab19613ca2e4 00:18:05.932 11:49:19 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:05.932 11:49:19 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
7bd82757-3c47-420d-b444-ab19613ca2e4 00:18:05.932 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=7bd82757-3c47-420d-b444-ab19613ca2e4 00:18:05.932 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:05.932 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:05.932 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:05.932 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7bd82757-3c47-420d-b444-ab19613ca2e4 00:18:05.933 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:05.933 { 00:18:05.933 "name": "7bd82757-3c47-420d-b444-ab19613ca2e4", 00:18:05.933 "aliases": [ 00:18:05.933 "lvs/nvme0n1p0" 00:18:05.933 ], 00:18:05.933 "product_name": "Logical Volume", 00:18:05.933 "block_size": 4096, 00:18:05.933 "num_blocks": 26476544, 00:18:05.933 "uuid": "7bd82757-3c47-420d-b444-ab19613ca2e4", 00:18:05.933 "assigned_rate_limits": { 00:18:05.933 "rw_ios_per_sec": 0, 00:18:05.933 "rw_mbytes_per_sec": 0, 00:18:05.933 "r_mbytes_per_sec": 0, 00:18:05.933 "w_mbytes_per_sec": 0 00:18:05.933 }, 00:18:05.933 "claimed": false, 00:18:05.933 "zoned": false, 00:18:05.933 "supported_io_types": { 00:18:05.933 "read": true, 00:18:05.933 "write": true, 00:18:05.933 "unmap": true, 00:18:05.933 "flush": false, 00:18:05.933 "reset": true, 00:18:05.933 "nvme_admin": false, 00:18:05.933 "nvme_io": false, 00:18:05.933 "nvme_io_md": false, 00:18:05.933 "write_zeroes": true, 00:18:05.933 "zcopy": false, 00:18:05.933 "get_zone_info": false, 00:18:05.933 "zone_management": false, 00:18:05.933 "zone_append": false, 00:18:05.933 "compare": false, 00:18:05.933 "compare_and_write": false, 00:18:05.933 "abort": false, 00:18:05.933 "seek_hole": true, 00:18:05.933 "seek_data": true, 00:18:05.933 "copy": false, 00:18:05.933 "nvme_iov_md": false 00:18:05.933 }, 00:18:05.933 "driver_specific": { 00:18:05.933 "lvol": { 00:18:05.933 "lvol_store_uuid": "7c3a2b44-656c-4297-8b62-1e1b9ba9ca79", 00:18:05.933 "base_bdev": "nvme0n1", 00:18:05.933 "thin_provision": true, 00:18:05.933 "num_allocated_clusters": 0, 00:18:05.934 "snapshot": false, 00:18:05.934 "clone": false, 00:18:05.934 "esnap_clone": false 00:18:05.934 } 00:18:05.934 } 00:18:05.934 } 00:18:05.934 ]' 00:18:05.934 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:06.195 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:06.195 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:06.195 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:06.195 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:06.195 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:06.195 11:49:19 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:06.195 11:49:19 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:06.195 11:49:19 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:06.456 11:49:19 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:06.456 11:49:19 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:06.456 11:49:19 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 7bd82757-3c47-420d-b444-ab19613ca2e4 00:18:06.456 11:49:19 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=7bd82757-3c47-420d-b444-ab19613ca2e4 00:18:06.456 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:06.456 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:06.456 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:06.456 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7bd82757-3c47-420d-b444-ab19613ca2e4 00:18:06.718 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:06.718 { 00:18:06.718 "name": "7bd82757-3c47-420d-b444-ab19613ca2e4", 00:18:06.718 "aliases": [ 00:18:06.718 "lvs/nvme0n1p0" 00:18:06.718 ], 00:18:06.718 "product_name": "Logical Volume", 00:18:06.718 "block_size": 4096, 00:18:06.718 "num_blocks": 26476544, 00:18:06.718 "uuid": "7bd82757-3c47-420d-b444-ab19613ca2e4", 00:18:06.718 "assigned_rate_limits": { 00:18:06.718 "rw_ios_per_sec": 0, 00:18:06.718 "rw_mbytes_per_sec": 0, 00:18:06.718 "r_mbytes_per_sec": 0, 00:18:06.718 "w_mbytes_per_sec": 0 00:18:06.718 }, 00:18:06.718 "claimed": false, 00:18:06.718 "zoned": false, 00:18:06.718 "supported_io_types": { 00:18:06.718 "read": true, 00:18:06.718 "write": true, 00:18:06.718 "unmap": true, 00:18:06.718 "flush": false, 00:18:06.718 "reset": true, 00:18:06.718 "nvme_admin": false, 00:18:06.718 "nvme_io": false, 00:18:06.718 "nvme_io_md": false, 00:18:06.718 "write_zeroes": true, 00:18:06.718 "zcopy": false, 00:18:06.718 "get_zone_info": false, 00:18:06.718 "zone_management": false, 00:18:06.718 "zone_append": false, 00:18:06.718 "compare": false, 00:18:06.718 "compare_and_write": false, 00:18:06.718 "abort": false, 00:18:06.718 "seek_hole": true, 00:18:06.718 "seek_data": true, 00:18:06.718 "copy": false, 00:18:06.718 "nvme_iov_md": false 00:18:06.718 }, 00:18:06.718 "driver_specific": { 00:18:06.718 "lvol": { 00:18:06.718 "lvol_store_uuid": "7c3a2b44-656c-4297-8b62-1e1b9ba9ca79", 00:18:06.718 "base_bdev": "nvme0n1", 00:18:06.718 "thin_provision": true, 00:18:06.718 "num_allocated_clusters": 0, 00:18:06.718 "snapshot": false, 00:18:06.718 "clone": false, 00:18:06.718 "esnap_clone": false 00:18:06.718 } 00:18:06.718 } 00:18:06.718 } 00:18:06.718 ]' 00:18:06.718 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:06.718 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:06.718 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:06.718 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:06.718 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:06.718 11:49:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:06.718 11:49:19 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:06.718 11:49:19 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:06.979 11:49:20 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:06.980 11:49:20 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 7bd82757-3c47-420d-b444-ab19613ca2e4 00:18:06.980 11:49:20 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=7bd82757-3c47-420d-b444-ab19613ca2e4 00:18:06.980 11:49:20 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:06.980 11:49:20 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:18:06.980 11:49:20 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:06.980 11:49:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7bd82757-3c47-420d-b444-ab19613ca2e4 00:18:06.980 11:49:20 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:06.980 { 00:18:06.980 "name": "7bd82757-3c47-420d-b444-ab19613ca2e4", 00:18:06.980 "aliases": [ 00:18:06.980 "lvs/nvme0n1p0" 00:18:06.980 ], 00:18:06.980 "product_name": "Logical Volume", 00:18:06.980 "block_size": 4096, 00:18:06.980 "num_blocks": 26476544, 00:18:06.980 "uuid": "7bd82757-3c47-420d-b444-ab19613ca2e4", 00:18:06.980 "assigned_rate_limits": { 00:18:06.980 "rw_ios_per_sec": 0, 00:18:06.980 "rw_mbytes_per_sec": 0, 00:18:06.980 "r_mbytes_per_sec": 0, 00:18:06.980 "w_mbytes_per_sec": 0 00:18:06.980 }, 00:18:06.980 "claimed": false, 00:18:06.980 "zoned": false, 00:18:06.980 "supported_io_types": { 00:18:06.980 "read": true, 00:18:06.980 "write": true, 00:18:06.980 "unmap": true, 00:18:06.980 "flush": false, 00:18:06.980 "reset": true, 00:18:06.980 "nvme_admin": false, 00:18:06.980 "nvme_io": false, 00:18:06.980 "nvme_io_md": false, 00:18:06.980 "write_zeroes": true, 00:18:06.980 "zcopy": false, 00:18:06.980 "get_zone_info": false, 00:18:06.980 "zone_management": false, 00:18:06.980 "zone_append": false, 00:18:06.980 "compare": false, 00:18:06.980 "compare_and_write": false, 00:18:06.980 "abort": false, 00:18:06.980 "seek_hole": true, 00:18:06.980 "seek_data": true, 00:18:06.980 "copy": false, 00:18:06.980 "nvme_iov_md": false 00:18:06.980 }, 00:18:06.980 "driver_specific": { 00:18:06.980 "lvol": { 00:18:06.980 "lvol_store_uuid": "7c3a2b44-656c-4297-8b62-1e1b9ba9ca79", 00:18:06.980 "base_bdev": "nvme0n1", 00:18:06.980 "thin_provision": true, 00:18:06.980 "num_allocated_clusters": 0, 00:18:06.980 "snapshot": false, 00:18:06.980 "clone": false, 00:18:06.980 "esnap_clone": false 00:18:06.980 } 00:18:06.980 } 00:18:06.980 } 00:18:06.980 ]' 00:18:06.980 11:49:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:06.980 11:49:20 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:06.980 11:49:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:07.242 11:49:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:07.242 11:49:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:07.242 11:49:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:07.242 11:49:20 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:07.242 11:49:20 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7bd82757-3c47-420d-b444-ab19613ca2e4 --l2p_dram_limit 10' 00:18:07.242 11:49:20 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:07.242 11:49:20 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:07.243 11:49:20 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:07.243 11:49:20 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:07.243 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:07.243 11:49:20 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7bd82757-3c47-420d-b444-ab19613ca2e4 --l2p_dram_limit 10 -c nvc0n1p0 00:18:07.243 
[2024-11-19 11:49:20.574455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.243 [2024-11-19 11:49:20.574496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:07.243 [2024-11-19 11:49:20.574506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:07.243 [2024-11-19 11:49:20.574514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.243 [2024-11-19 11:49:20.574557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.243 [2024-11-19 11:49:20.574566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:07.243 [2024-11-19 11:49:20.574572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:07.243 [2024-11-19 11:49:20.574581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.243 [2024-11-19 11:49:20.574597] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:07.243 [2024-11-19 11:49:20.575038] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:07.243 [2024-11-19 11:49:20.575071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.243 [2024-11-19 11:49:20.575080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:07.243 [2024-11-19 11:49:20.575091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:18:07.243 [2024-11-19 11:49:20.575099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.243 [2024-11-19 11:49:20.575167] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 103bcb84-4551-4691-830d-e5d80ae80c8c 00:18:07.243 [2024-11-19 11:49:20.576100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.243 [2024-11-19 11:49:20.576122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:07.243 [2024-11-19 11:49:20.576132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:07.243 [2024-11-19 11:49:20.576138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.243 [2024-11-19 11:49:20.580809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.243 [2024-11-19 11:49:20.580837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:07.243 [2024-11-19 11:49:20.580846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:18:07.243 [2024-11-19 11:49:20.580851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.243 [2024-11-19 11:49:20.580909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.243 [2024-11-19 11:49:20.580916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:07.243 [2024-11-19 11:49:20.580926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:07.243 [2024-11-19 11:49:20.580934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.243 [2024-11-19 11:49:20.580980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.243 [2024-11-19 11:49:20.580989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:07.243 [2024-11-19 11:49:20.580998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:07.243 [2024-11-19 11:49:20.581003] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.243 [2024-11-19 11:49:20.581022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:07.243 [2024-11-19 11:49:20.582268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.243 [2024-11-19 11:49:20.582296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:07.243 [2024-11-19 11:49:20.582305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:18:07.243 [2024-11-19 11:49:20.582312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.243 [2024-11-19 11:49:20.582337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.243 [2024-11-19 11:49:20.582344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:07.243 [2024-11-19 11:49:20.582350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:07.243 [2024-11-19 11:49:20.582359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.243 [2024-11-19 11:49:20.582374] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:07.243 [2024-11-19 11:49:20.582494] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:07.243 [2024-11-19 11:49:20.582504] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:07.243 [2024-11-19 11:49:20.582517] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:07.243 [2024-11-19 11:49:20.582525] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:07.243 [2024-11-19 11:49:20.582535] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:07.243 [2024-11-19 11:49:20.582541] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:07.243 [2024-11-19 11:49:20.582553] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:07.243 [2024-11-19 11:49:20.582558] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:07.243 [2024-11-19 11:49:20.582565] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:07.243 [2024-11-19 11:49:20.582572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.243 [2024-11-19 11:49:20.582580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:07.243 [2024-11-19 11:49:20.582586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:18:07.243 [2024-11-19 11:49:20.582592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.243 [2024-11-19 11:49:20.582656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.243 [2024-11-19 11:49:20.582668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:07.243 [2024-11-19 11:49:20.582673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:07.243 [2024-11-19 11:49:20.582680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.243 [2024-11-19 11:49:20.582750] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:07.243 [2024-11-19 11:49:20.582771] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:18:07.243 [2024-11-19 11:49:20.582778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:07.243 [2024-11-19 11:49:20.582789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.243 [2024-11-19 11:49:20.582795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:07.243 [2024-11-19 11:49:20.582801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:07.243 [2024-11-19 11:49:20.582806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:07.243 [2024-11-19 11:49:20.582813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:07.243 [2024-11-19 11:49:20.582818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:07.243 [2024-11-19 11:49:20.582825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:07.243 [2024-11-19 11:49:20.582830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:07.243 [2024-11-19 11:49:20.582837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:07.243 [2024-11-19 11:49:20.582842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:07.243 [2024-11-19 11:49:20.582850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:07.243 [2024-11-19 11:49:20.582855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:07.243 [2024-11-19 11:49:20.582861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.243 [2024-11-19 11:49:20.582867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:07.243 [2024-11-19 11:49:20.582873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:07.243 [2024-11-19 11:49:20.582878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.243 [2024-11-19 11:49:20.582885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:07.243 [2024-11-19 11:49:20.582890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:07.243 [2024-11-19 11:49:20.582896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.243 [2024-11-19 11:49:20.582901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:07.243 [2024-11-19 11:49:20.582907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:07.243 [2024-11-19 11:49:20.582912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.243 [2024-11-19 11:49:20.582919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:07.243 [2024-11-19 11:49:20.582925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:07.243 [2024-11-19 11:49:20.582932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.243 [2024-11-19 11:49:20.582938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:07.243 [2024-11-19 11:49:20.582948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:07.243 [2024-11-19 11:49:20.582954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:07.243 [2024-11-19 11:49:20.582960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:07.243 [2024-11-19 11:49:20.582966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:07.243 [2024-11-19 11:49:20.582973] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:07.243 [2024-11-19 11:49:20.582980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:07.243 [2024-11-19 11:49:20.582987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:07.243 [2024-11-19 11:49:20.582992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:07.243 [2024-11-19 11:49:20.582999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:07.243 [2024-11-19 11:49:20.583004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:07.243 [2024-11-19 11:49:20.583012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.243 [2024-11-19 11:49:20.583017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:07.244 [2024-11-19 11:49:20.583025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:07.244 [2024-11-19 11:49:20.583030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.244 [2024-11-19 11:49:20.583037] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:07.244 [2024-11-19 11:49:20.583048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:07.244 [2024-11-19 11:49:20.583059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:07.244 [2024-11-19 11:49:20.583067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:07.244 [2024-11-19 11:49:20.583075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:07.244 [2024-11-19 11:49:20.583082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:07.244 [2024-11-19 11:49:20.583089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:07.244 [2024-11-19 11:49:20.583096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:07.244 [2024-11-19 11:49:20.583104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:07.244 [2024-11-19 11:49:20.583110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:07.244 [2024-11-19 11:49:20.583120] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:07.244 [2024-11-19 11:49:20.583128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:07.244 [2024-11-19 11:49:20.583136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:07.244 [2024-11-19 11:49:20.583143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:07.244 [2024-11-19 11:49:20.583150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:07.244 [2024-11-19 11:49:20.583156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:07.244 [2024-11-19 11:49:20.583164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:07.244 [2024-11-19 11:49:20.583170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:18:07.244 [2024-11-19 11:49:20.583178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:07.244 [2024-11-19 11:49:20.583184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:07.244 [2024-11-19 11:49:20.583192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:07.244 [2024-11-19 11:49:20.583198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:07.244 [2024-11-19 11:49:20.583205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:07.244 [2024-11-19 11:49:20.583211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:07.244 [2024-11-19 11:49:20.583218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:07.244 [2024-11-19 11:49:20.583225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:07.244 [2024-11-19 11:49:20.583233] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:07.244 [2024-11-19 11:49:20.583241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:07.244 [2024-11-19 11:49:20.583249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:07.244 [2024-11-19 11:49:20.583255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:07.244 [2024-11-19 11:49:20.583263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:07.244 [2024-11-19 11:49:20.583269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:07.244 [2024-11-19 11:49:20.583278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.244 [2024-11-19 11:49:20.583284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:07.244 [2024-11-19 11:49:20.583294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:18:07.244 [2024-11-19 11:49:20.583300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.244 [2024-11-19 11:49:20.583329] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
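A few entries up, "/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected" is bash objecting to '[' '' -eq 1 ']': the flag variable is empty and -eq requires integers on both sides, so the test errors out, evaluates false, and the run continues down the regular path. A defensive sketch of such a test (the variable name is hypothetical, and this is not the repository's fix):

    fast_shutdown=''                            # empty, as in this run
    if [ "${fast_shutdown:-0}" -eq 1 ]; then    # default empty/unset to 0 before -eq
        echo 'fast shutdown path'
    else
        echo 'regular shutdown path'
    fi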
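The layout dump above is internally consistent and worth sanity-checking when reading logs like this: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB reported for the l2p region, and it is that 80 MiB table which --l2p_dram_limit 10 later squeezes into a small resident window (the "l2p maximum resident size is: 9 (of 10) MiB" notice further down). The arithmetic, as a quick shell check:

    echo $(( 20971520 * 4 / 1024 / 1024 ))    # L2P entries x 4 B per entry -> 80 (MiB)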
00:18:07.244 [2024-11-19 11:49:20.583338] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:11.453 [2024-11-19 11:49:24.765744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.453 [2024-11-19 11:49:24.765834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:11.453 [2024-11-19 11:49:24.765861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4182.382 ms 00:18:11.453 [2024-11-19 11:49:24.765871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.453 [2024-11-19 11:49:24.780170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.453 [2024-11-19 11:49:24.780230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:11.453 [2024-11-19 11:49:24.780252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.171 ms 00:18:11.453 [2024-11-19 11:49:24.780262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.453 [2024-11-19 11:49:24.780398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.453 [2024-11-19 11:49:24.780443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:11.453 [2024-11-19 11:49:24.780463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:11.453 [2024-11-19 11:49:24.780473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.453 [2024-11-19 11:49:24.792155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.453 [2024-11-19 11:49:24.792206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:11.453 [2024-11-19 11:49:24.792219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.613 ms 00:18:11.453 [2024-11-19 11:49:24.792228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.453 [2024-11-19 11:49:24.792266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.453 [2024-11-19 11:49:24.792275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.453 [2024-11-19 11:49:24.792288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:11.453 [2024-11-19 11:49:24.792296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.453 [2024-11-19 11:49:24.792889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.453 [2024-11-19 11:49:24.792933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.453 [2024-11-19 11:49:24.792947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:18:11.453 [2024-11-19 11:49:24.792956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.453 [2024-11-19 11:49:24.793086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.454 [2024-11-19 11:49:24.793097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.454 [2024-11-19 11:49:24.793109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:18:11.454 [2024-11-19 11:49:24.793121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.454 [2024-11-19 11:49:24.813317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.454 [2024-11-19 11:49:24.813386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.454 [2024-11-19 
11:49:24.813427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.161 ms 00:18:11.454 [2024-11-19 11:49:24.813440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.454 [2024-11-19 11:49:24.824849] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:11.454 [2024-11-19 11:49:24.828626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.454 [2024-11-19 11:49:24.828673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:11.454 [2024-11-19 11:49:24.828685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.036 ms 00:18:11.454 [2024-11-19 11:49:24.828696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.715 [2024-11-19 11:49:24.921740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.715 [2024-11-19 11:49:24.921817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:11.715 [2024-11-19 11:49:24.921831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.011 ms 00:18:11.715 [2024-11-19 11:49:24.921847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.715 [2024-11-19 11:49:24.922064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.715 [2024-11-19 11:49:24.922078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:11.716 [2024-11-19 11:49:24.922088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:18:11.716 [2024-11-19 11:49:24.922105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.716 [2024-11-19 11:49:24.928361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.716 [2024-11-19 11:49:24.928432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:11.716 [2024-11-19 11:49:24.928445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.217 ms 00:18:11.716 [2024-11-19 11:49:24.928457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.716 [2024-11-19 11:49:24.933556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.716 [2024-11-19 11:49:24.933612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:11.716 [2024-11-19 11:49:24.933623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.046 ms 00:18:11.716 [2024-11-19 11:49:24.933633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.716 [2024-11-19 11:49:24.933977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.716 [2024-11-19 11:49:24.933992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:11.716 [2024-11-19 11:49:24.934002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:18:11.716 [2024-11-19 11:49:24.934014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.716 [2024-11-19 11:49:24.979616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.716 [2024-11-19 11:49:24.979685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:11.716 [2024-11-19 11:49:24.979698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.580 ms 00:18:11.716 [2024-11-19 11:49:24.979709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.716 [2024-11-19 11:49:24.986696] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.716 [2024-11-19 11:49:24.986759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:11.716 [2024-11-19 11:49:24.986772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.919 ms 00:18:11.716 [2024-11-19 11:49:24.986783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.716 [2024-11-19 11:49:24.992765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.716 [2024-11-19 11:49:24.992820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:11.716 [2024-11-19 11:49:24.992830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.933 ms 00:18:11.716 [2024-11-19 11:49:24.992840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.716 [2024-11-19 11:49:24.999060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.716 [2024-11-19 11:49:24.999121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:11.716 [2024-11-19 11:49:24.999132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.172 ms 00:18:11.716 [2024-11-19 11:49:24.999144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.716 [2024-11-19 11:49:24.999197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.716 [2024-11-19 11:49:24.999209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:11.716 [2024-11-19 11:49:24.999218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:11.716 [2024-11-19 11:49:24.999229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.716 [2024-11-19 11:49:24.999303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.716 [2024-11-19 11:49:24.999316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:11.716 [2024-11-19 11:49:24.999324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:11.716 [2024-11-19 11:49:24.999334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.716 [2024-11-19 11:49:25.001061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4426.081 ms, result 0 00:18:11.716 { 00:18:11.716 "name": "ftl0", 00:18:11.716 "uuid": "103bcb84-4551-4691-830d-e5d80ae80c8c" 00:18:11.716 } 00:18:11.716 11:49:25 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:11.716 11:49:25 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:11.976 11:49:25 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:11.976 11:49:25 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:12.239 [2024-11-19 11:49:25.442516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.239 [2024-11-19 11:49:25.442576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:12.239 [2024-11-19 11:49:25.442593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:12.239 [2024-11-19 11:49:25.442602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.239 [2024-11-19 11:49:25.442630] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:12.239 
[2024-11-19 11:49:25.443442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.239 [2024-11-19 11:49:25.443493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:12.239 [2024-11-19 11:49:25.443504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:18:12.239 [2024-11-19 11:49:25.443514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.239 [2024-11-19 11:49:25.443784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.239 [2024-11-19 11:49:25.443797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:12.239 [2024-11-19 11:49:25.443811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:18:12.239 [2024-11-19 11:49:25.443822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.239 [2024-11-19 11:49:25.447081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.239 [2024-11-19 11:49:25.447121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:12.239 [2024-11-19 11:49:25.447130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:18:12.239 [2024-11-19 11:49:25.447143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.239 [2024-11-19 11:49:25.453392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.239 [2024-11-19 11:49:25.453448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:12.239 [2024-11-19 11:49:25.453460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.230 ms 00:18:12.239 [2024-11-19 11:49:25.453471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.239 [2024-11-19 11:49:25.456517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.239 [2024-11-19 11:49:25.456584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:12.239 [2024-11-19 11:49:25.456596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:18:12.239 [2024-11-19 11:49:25.456605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.239 [2024-11-19 11:49:25.463296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.239 [2024-11-19 11:49:25.463361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:12.239 [2024-11-19 11:49:25.463373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.640 ms 00:18:12.239 [2024-11-19 11:49:25.463384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.239 [2024-11-19 11:49:25.463534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.239 [2024-11-19 11:49:25.463548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:12.239 [2024-11-19 11:49:25.463558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:12.239 [2024-11-19 11:49:25.463574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.239 [2024-11-19 11:49:25.466479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.239 [2024-11-19 11:49:25.466537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:12.239 [2024-11-19 11:49:25.466549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:18:12.239 [2024-11-19 11:49:25.466560] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.239 [2024-11-19 11:49:25.469777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.239 [2024-11-19 11:49:25.469840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:12.239 [2024-11-19 11:49:25.469850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.167 ms 00:18:12.239 [2024-11-19 11:49:25.469860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.239 [2024-11-19 11:49:25.472479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.239 [2024-11-19 11:49:25.472533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:12.239 [2024-11-19 11:49:25.472543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.572 ms 00:18:12.239 [2024-11-19 11:49:25.472553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.239 [2024-11-19 11:49:25.474995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.239 [2024-11-19 11:49:25.475053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:12.239 [2024-11-19 11:49:25.475073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.363 ms 00:18:12.239 [2024-11-19 11:49:25.475082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.239 [2024-11-19 11:49:25.475128] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:12.239 [2024-11-19 11:49:25.475146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:12.239 [2024-11-19 11:49:25.475157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:12.239 [2024-11-19 11:49:25.475168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:12.239 [2024-11-19 11:49:25.475176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:12.239 [2024-11-19 11:49:25.475190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475280] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 
[2024-11-19 11:49:25.475527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:18:12.240 [2024-11-19 11:49:25.475754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.475993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.476002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:12.240 [2024-11-19 11:49:25.476011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:12.241 [2024-11-19 11:49:25.476025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:12.241 [2024-11-19 11:49:25.476033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:12.241 [2024-11-19 11:49:25.476042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:12.241 [2024-11-19 11:49:25.476050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:12.241 [2024-11-19 11:49:25.476059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:12.241 [2024-11-19 11:49:25.476067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:12.241 [2024-11-19 11:49:25.476090] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:12.241 [2024-11-19 11:49:25.476103] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 103bcb84-4551-4691-830d-e5d80ae80c8c 00:18:12.241 [2024-11-19 11:49:25.476114] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:12.241 [2024-11-19 11:49:25.476123] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:12.241 [2024-11-19 11:49:25.476137] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:12.241 [2024-11-19 11:49:25.476145] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:12.241 [2024-11-19 11:49:25.476155] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:12.241 [2024-11-19 11:49:25.476163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:12.241 [2024-11-19 11:49:25.476174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:12.241 [2024-11-19 11:49:25.476181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:12.241 [2024-11-19 11:49:25.476189] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:12.241 [2024-11-19 11:49:25.476196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.241 [2024-11-19 11:49:25.476206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:12.241 [2024-11-19 11:49:25.476218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:18:12.241 [2024-11-19 11:49:25.476229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.478558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.241 [2024-11-19 11:49:25.478603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:12.241 
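The statistics dump above ends with "WAF: inf", which is easy to misread as an error. Write amplification factor is conventionally the ratio of media writes to host writes, and this run shut down before any user I/O was issued, so the 960 internal (metadata) writes are divided by zero user writes:

\mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{960}{0} \rightarrow \infty

Hence "inf" here simply means no user writes have landed yet, not a fault.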
[2024-11-19 11:49:25.478613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.307 ms 00:18:12.241 [2024-11-19 11:49:25.478624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.478744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.241 [2024-11-19 11:49:25.478760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:12.241 [2024-11-19 11:49:25.478769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:12.241 [2024-11-19 11:49:25.478781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.486872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.241 [2024-11-19 11:49:25.486926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.241 [2024-11-19 11:49:25.486944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.241 [2024-11-19 11:49:25.486955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.487026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.241 [2024-11-19 11:49:25.487037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.241 [2024-11-19 11:49:25.487046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.241 [2024-11-19 11:49:25.487056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.487147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.241 [2024-11-19 11:49:25.487164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.241 [2024-11-19 11:49:25.487173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.241 [2024-11-19 11:49:25.487182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.487202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.241 [2024-11-19 11:49:25.487215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.241 [2024-11-19 11:49:25.487222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.241 [2024-11-19 11:49:25.487232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.500392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.241 [2024-11-19 11:49:25.500458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.241 [2024-11-19 11:49:25.500473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.241 [2024-11-19 11:49:25.500483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.511142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.241 [2024-11-19 11:49:25.511201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.241 [2024-11-19 11:49:25.511212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.241 [2024-11-19 11:49:25.511226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.511302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.241 [2024-11-19 11:49:25.511318] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.241 [2024-11-19 11:49:25.511327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.241 [2024-11-19 11:49:25.511343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.511393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.241 [2024-11-19 11:49:25.511422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.241 [2024-11-19 11:49:25.511430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.241 [2024-11-19 11:49:25.511443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.511517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.241 [2024-11-19 11:49:25.511529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.241 [2024-11-19 11:49:25.511537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.241 [2024-11-19 11:49:25.511547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.511580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.241 [2024-11-19 11:49:25.511591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:12.241 [2024-11-19 11:49:25.511600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.241 [2024-11-19 11:49:25.511612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.511652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.241 [2024-11-19 11:49:25.511665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.241 [2024-11-19 11:49:25.511674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.241 [2024-11-19 11:49:25.511684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.511736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.241 [2024-11-19 11:49:25.511759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.241 [2024-11-19 11:49:25.511768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.241 [2024-11-19 11:49:25.511785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.241 [2024-11-19 11:49:25.511929] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.379 ms, result 0 00:18:12.241 true 00:18:12.241 11:49:25 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86010 00:18:12.241 11:49:25 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86010 ']' 00:18:12.241 11:49:25 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86010 00:18:12.241 11:49:25 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:12.241 11:49:25 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:12.241 11:49:25 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86010 00:18:12.241 killing process with pid 86010 00:18:12.241 11:49:25 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:12.241 11:49:25 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:12.241 
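The killprocess trace above (autotest_common.sh@950 to @974) is the harness's standard teardown: confirm the recorded PID is alive and belongs to an SPDK reactor rather than a sudo wrapper, signal it, then reap it. A rough bash reconstruction from the trace alone, not copied from the harness source (the function structure and local names are assumptions):

killprocess() {
  local pid=$1
  # Bail out if no PID was recorded or the process is already gone.
  [ -z "$pid" ] && return 1
  kill -0 "$pid" || return 1
  # On Linux, resolve the command name so a privileged wrapper is never signaled.
  if [ "$(uname)" = Linux ]; then
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  # Block until the app exits so the next stage starts from a clean state.
  wait "$pid"
}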
11:49:25 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86010' 00:18:12.241 11:49:25 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86010 00:18:12.241 11:49:25 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86010 00:18:17.531 11:49:30 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:21.744 262144+0 records in 00:18:21.744 262144+0 records out 00:18:21.744 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.29666 s, 250 MB/s 00:18:21.744 11:49:34 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:23.662 11:49:36 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:23.662 [2024-11-19 11:49:36.732293] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:23.662 [2024-11-19 11:49:36.732429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86235 ] 00:18:23.662 [2024-11-19 11:49:36.865351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.662 [2024-11-19 11:49:36.898388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.662 [2024-11-19 11:49:36.979154] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:23.662 [2024-11-19 11:49:36.979210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:23.926 [2024-11-19 11:49:37.124904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.926 [2024-11-19 11:49:37.124940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:23.926 [2024-11-19 11:49:37.124952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:23.926 [2024-11-19 11:49:37.124958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.926 [2024-11-19 11:49:37.124989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.926 [2024-11-19 11:49:37.124998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:23.926 [2024-11-19 11:49:37.125007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:23.926 [2024-11-19 11:49:37.125017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.926 [2024-11-19 11:49:37.125029] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:23.926 [2024-11-19 11:49:37.125199] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:23.926 [2024-11-19 11:49:37.125210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.926 [2024-11-19 11:49:37.125216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:23.926 [2024-11-19 11:49:37.125223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:18:23.926 [2024-11-19 11:49:37.125229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.926 [2024-11-19 11:49:37.126111] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: 
[FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:23.926 [2024-11-19 11:49:37.128070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.926 [2024-11-19 11:49:37.128097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:23.926 [2024-11-19 11:49:37.128105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.960 ms 00:18:23.926 [2024-11-19 11:49:37.128111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.926 [2024-11-19 11:49:37.128152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.926 [2024-11-19 11:49:37.128161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:23.926 [2024-11-19 11:49:37.128167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:23.926 [2024-11-19 11:49:37.128172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.926 [2024-11-19 11:49:37.132398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.926 [2024-11-19 11:49:37.132430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:23.926 [2024-11-19 11:49:37.132437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.199 ms 00:18:23.926 [2024-11-19 11:49:37.132452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.926 [2024-11-19 11:49:37.132509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.926 [2024-11-19 11:49:37.132517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:23.926 [2024-11-19 11:49:37.132523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:23.926 [2024-11-19 11:49:37.132528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.926 [2024-11-19 11:49:37.132565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.926 [2024-11-19 11:49:37.132574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:23.926 [2024-11-19 11:49:37.132581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:23.926 [2024-11-19 11:49:37.132586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.926 [2024-11-19 11:49:37.132601] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:23.926 [2024-11-19 11:49:37.133717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.926 [2024-11-19 11:49:37.133740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:23.926 [2024-11-19 11:49:37.133747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:18:23.926 [2024-11-19 11:49:37.133753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.926 [2024-11-19 11:49:37.133777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.926 [2024-11-19 11:49:37.133784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:23.926 [2024-11-19 11:49:37.133790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:23.926 [2024-11-19 11:49:37.133801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.926 [2024-11-19 11:49:37.133816] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:23.926 [2024-11-19 11:49:37.133830] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:23.926 [2024-11-19 11:49:37.133858] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:23.926 [2024-11-19 11:49:37.133871] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:23.926 [2024-11-19 11:49:37.133951] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:23.926 [2024-11-19 11:49:37.133961] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:23.926 [2024-11-19 11:49:37.133970] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:23.926 [2024-11-19 11:49:37.133980] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:23.926 [2024-11-19 11:49:37.133987] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:23.926 [2024-11-19 11:49:37.133993] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:23.926 [2024-11-19 11:49:37.133999] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:23.926 [2024-11-19 11:49:37.134004] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:23.926 [2024-11-19 11:49:37.134009] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:23.926 [2024-11-19 11:49:37.134018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.926 [2024-11-19 11:49:37.134023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:23.926 [2024-11-19 11:49:37.134032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:18:23.926 [2024-11-19 11:49:37.134038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.926 [2024-11-19 11:49:37.134101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.926 [2024-11-19 11:49:37.134118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:23.926 [2024-11-19 11:49:37.134127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:23.926 [2024-11-19 11:49:37.134136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.926 [2024-11-19 11:49:37.134209] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:23.926 [2024-11-19 11:49:37.134217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:23.926 [2024-11-19 11:49:37.134223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:23.926 [2024-11-19 11:49:37.134228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.926 [2024-11-19 11:49:37.134234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:23.926 [2024-11-19 11:49:37.134239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:23.926 [2024-11-19 11:49:37.134244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:23.926 [2024-11-19 11:49:37.134249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:23.926 [2024-11-19 11:49:37.134254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:23.926 [2024-11-19 
11:49:37.134260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:23.926 [2024-11-19 11:49:37.134265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:23.926 [2024-11-19 11:49:37.134271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:23.926 [2024-11-19 11:49:37.134277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:23.926 [2024-11-19 11:49:37.134283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:23.926 [2024-11-19 11:49:37.134289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:23.926 [2024-11-19 11:49:37.134294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.926 [2024-11-19 11:49:37.134300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:23.926 [2024-11-19 11:49:37.134305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:23.926 [2024-11-19 11:49:37.134309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.926 [2024-11-19 11:49:37.134315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:23.926 [2024-11-19 11:49:37.134319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:23.926 [2024-11-19 11:49:37.134324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.927 [2024-11-19 11:49:37.134329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:23.927 [2024-11-19 11:49:37.134334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:23.927 [2024-11-19 11:49:37.134339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.927 [2024-11-19 11:49:37.134344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:23.927 [2024-11-19 11:49:37.134349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:23.927 [2024-11-19 11:49:37.134354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.927 [2024-11-19 11:49:37.134359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:23.927 [2024-11-19 11:49:37.134366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:23.927 [2024-11-19 11:49:37.134372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.927 [2024-11-19 11:49:37.134377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:23.927 [2024-11-19 11:49:37.134384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:23.927 [2024-11-19 11:49:37.134390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:23.927 [2024-11-19 11:49:37.134396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:23.927 [2024-11-19 11:49:37.134401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:23.927 [2024-11-19 11:49:37.134418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:23.927 [2024-11-19 11:49:37.134424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:23.927 [2024-11-19 11:49:37.134430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:23.927 [2024-11-19 11:49:37.134435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.927 [2024-11-19 11:49:37.134441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:18:23.927 [2024-11-19 11:49:37.134447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:23.927 [2024-11-19 11:49:37.134453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.927 [2024-11-19 11:49:37.134460] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:23.927 [2024-11-19 11:49:37.134466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:23.927 [2024-11-19 11:49:37.134476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:23.927 [2024-11-19 11:49:37.134482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.927 [2024-11-19 11:49:37.134489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:23.927 [2024-11-19 11:49:37.134495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:23.927 [2024-11-19 11:49:37.134502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:23.927 [2024-11-19 11:49:37.134508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:23.927 [2024-11-19 11:49:37.134514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:23.927 [2024-11-19 11:49:37.134520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:23.927 [2024-11-19 11:49:37.134527] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:23.927 [2024-11-19 11:49:37.134534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:23.927 [2024-11-19 11:49:37.134541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:23.927 [2024-11-19 11:49:37.134548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:23.927 [2024-11-19 11:49:37.134555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:23.927 [2024-11-19 11:49:37.134561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:23.927 [2024-11-19 11:49:37.134567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:23.927 [2024-11-19 11:49:37.134573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:23.927 [2024-11-19 11:49:37.134580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:23.927 [2024-11-19 11:49:37.134586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:23.927 [2024-11-19 11:49:37.134593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:23.927 [2024-11-19 11:49:37.134603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:23.927 [2024-11-19 11:49:37.134609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:23.927 [2024-11-19 11:49:37.134615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:23.927 [2024-11-19 11:49:37.134621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:23.927 [2024-11-19 11:49:37.134627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:23.927 [2024-11-19 11:49:37.134634] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:23.927 [2024-11-19 11:49:37.134641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:23.927 [2024-11-19 11:49:37.134648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:23.927 [2024-11-19 11:49:37.134654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:23.927 [2024-11-19 11:49:37.134660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:23.927 [2024-11-19 11:49:37.134666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:23.927 [2024-11-19 11:49:37.134672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.927 [2024-11-19 11:49:37.134679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:23.927 [2024-11-19 11:49:37.134687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:18:23.927 [2024-11-19 11:49:37.134693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.927 [2024-11-19 11:49:37.154569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.927 [2024-11-19 11:49:37.154654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:23.927 [2024-11-19 11:49:37.154684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.836 ms 00:18:23.927 [2024-11-19 11:49:37.154712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.927 [2024-11-19 11:49:37.154940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.927 [2024-11-19 11:49:37.154980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:23.927 [2024-11-19 11:49:37.155003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:18:23.927 [2024-11-19 11:49:37.155030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.927 [2024-11-19 11:49:37.166336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.927 [2024-11-19 11:49:37.166364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:23.927 [2024-11-19 11:49:37.166372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.167 ms 00:18:23.927 [2024-11-19 11:49:37.166377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.927 [2024-11-19 11:49:37.166403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.927 [2024-11-19 
11:49:37.166419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:23.927 [2024-11-19 11:49:37.166425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:23.927 [2024-11-19 11:49:37.166431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.927 [2024-11-19 11:49:37.166725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.927 [2024-11-19 11:49:37.166747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:23.927 [2024-11-19 11:49:37.166754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:18:23.927 [2024-11-19 11:49:37.166760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.927 [2024-11-19 11:49:37.166853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.927 [2024-11-19 11:49:37.166865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:23.927 [2024-11-19 11:49:37.166873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:23.927 [2024-11-19 11:49:37.166879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.927 [2024-11-19 11:49:37.170715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.927 [2024-11-19 11:49:37.170743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:23.927 [2024-11-19 11:49:37.170750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.821 ms 00:18:23.927 [2024-11-19 11:49:37.170759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.927 [2024-11-19 11:49:37.172784] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:23.927 [2024-11-19 11:49:37.172813] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:23.927 [2024-11-19 11:49:37.172822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.927 [2024-11-19 11:49:37.172830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:23.927 [2024-11-19 11:49:37.172836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:18:23.927 [2024-11-19 11:49:37.172842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.927 [2024-11-19 11:49:37.183857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.927 [2024-11-19 11:49:37.183884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:23.927 [2024-11-19 11:49:37.183898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.987 ms 00:18:23.927 [2024-11-19 11:49:37.183904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.927 [2024-11-19 11:49:37.185322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.927 [2024-11-19 11:49:37.185349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:23.927 [2024-11-19 11:49:37.185355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:18:23.927 [2024-11-19 11:49:37.185361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.927 [2024-11-19 11:49:37.186544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.928 [2024-11-19 11:49:37.186569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:18:23.928 [2024-11-19 11:49:37.186575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.160 ms 00:18:23.928 [2024-11-19 11:49:37.186580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.928 [2024-11-19 11:49:37.186880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.928 [2024-11-19 11:49:37.186900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:23.928 [2024-11-19 11:49:37.186910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:18:23.928 [2024-11-19 11:49:37.186919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.928 [2024-11-19 11:49:37.200341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.928 [2024-11-19 11:49:37.200374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:23.928 [2024-11-19 11:49:37.200384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.410 ms 00:18:23.928 [2024-11-19 11:49:37.200391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.928 [2024-11-19 11:49:37.206054] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:23.928 [2024-11-19 11:49:37.207791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.928 [2024-11-19 11:49:37.207814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:23.928 [2024-11-19 11:49:37.207821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.363 ms 00:18:23.928 [2024-11-19 11:49:37.207831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.928 [2024-11-19 11:49:37.207867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.928 [2024-11-19 11:49:37.207875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:23.928 [2024-11-19 11:49:37.207881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:23.928 [2024-11-19 11:49:37.207888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.928 [2024-11-19 11:49:37.207937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.928 [2024-11-19 11:49:37.207944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:23.928 [2024-11-19 11:49:37.207951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:23.928 [2024-11-19 11:49:37.207956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.928 [2024-11-19 11:49:37.207977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.928 [2024-11-19 11:49:37.207988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:23.928 [2024-11-19 11:49:37.207994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:23.928 [2024-11-19 11:49:37.208000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.928 [2024-11-19 11:49:37.208024] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:23.928 [2024-11-19 11:49:37.208031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.928 [2024-11-19 11:49:37.208037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:23.928 [2024-11-19 11:49:37.208043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.008 ms 00:18:23.928 [2024-11-19 11:49:37.208048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.928 [2024-11-19 11:49:37.210800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.928 [2024-11-19 11:49:37.210831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:23.928 [2024-11-19 11:49:37.210838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.738 ms 00:18:23.928 [2024-11-19 11:49:37.210844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.928 [2024-11-19 11:49:37.210899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.928 [2024-11-19 11:49:37.210907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:23.928 [2024-11-19 11:49:37.210914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:23.928 [2024-11-19 11:49:37.210919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.928 [2024-11-19 11:49:37.211663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 86.445 ms, result 0 00:18:24.873  [2024-11-19T11:49:39.228Z] Copying: 31/1024 [MB] (31 MBps) [2024-11-19T11:49:40.613Z] Copying: 52/1024 [MB] (21 MBps) [2024-11-19T11:49:41.556Z] Copying: 84/1024 [MB] (32 MBps) [2024-11-19T11:49:42.501Z] Copying: 114/1024 [MB] (29 MBps) [2024-11-19T11:49:43.446Z] Copying: 127/1024 [MB] (13 MBps) [2024-11-19T11:49:44.390Z] Copying: 149/1024 [MB] (22 MBps) [2024-11-19T11:49:45.337Z] Copying: 182/1024 [MB] (33 MBps) [2024-11-19T11:49:46.280Z] Copying: 196/1024 [MB] (13 MBps) [2024-11-19T11:49:47.223Z] Copying: 216/1024 [MB] (19 MBps) [2024-11-19T11:49:48.611Z] Copying: 229/1024 [MB] (13 MBps) [2024-11-19T11:49:49.555Z] Copying: 245/1024 [MB] (15 MBps) [2024-11-19T11:49:50.497Z] Copying: 260/1024 [MB] (15 MBps) [2024-11-19T11:49:51.443Z] Copying: 275/1024 [MB] (15 MBps) [2024-11-19T11:49:52.387Z] Copying: 292/1024 [MB] (17 MBps) [2024-11-19T11:49:53.355Z] Copying: 307/1024 [MB] (15 MBps) [2024-11-19T11:49:54.325Z] Copying: 324/1024 [MB] (16 MBps) [2024-11-19T11:49:55.270Z] Copying: 335/1024 [MB] (10 MBps) [2024-11-19T11:49:56.660Z] Copying: 345/1024 [MB] (10 MBps) [2024-11-19T11:49:57.227Z] Copying: 355/1024 [MB] (10 MBps) [2024-11-19T11:49:58.602Z] Copying: 369/1024 [MB] (13 MBps) [2024-11-19T11:49:59.544Z] Copying: 381/1024 [MB] (12 MBps) [2024-11-19T11:50:00.488Z] Copying: 401/1024 [MB] (19 MBps) [2024-11-19T11:50:01.432Z] Copying: 412/1024 [MB] (10 MBps) [2024-11-19T11:50:02.375Z] Copying: 422/1024 [MB] (10 MBps) [2024-11-19T11:50:03.320Z] Copying: 450/1024 [MB] (27 MBps) [2024-11-19T11:50:04.264Z] Copying: 493/1024 [MB] (42 MBps) [2024-11-19T11:50:05.651Z] Copying: 504/1024 [MB] (11 MBps) [2024-11-19T11:50:06.223Z] Copying: 519/1024 [MB] (15 MBps) [2024-11-19T11:50:07.610Z] Copying: 541/1024 [MB] (21 MBps) [2024-11-19T11:50:08.554Z] Copying: 560/1024 [MB] (19 MBps) [2024-11-19T11:50:09.498Z] Copying: 578/1024 [MB] (17 MBps) [2024-11-19T11:50:10.443Z] Copying: 599/1024 [MB] (21 MBps) [2024-11-19T11:50:11.386Z] Copying: 615/1024 [MB] (16 MBps) [2024-11-19T11:50:12.330Z] Copying: 630/1024 [MB] (14 MBps) [2024-11-19T11:50:13.273Z] Copying: 647/1024 [MB] (17 MBps) [2024-11-19T11:50:14.663Z] Copying: 665/1024 [MB] (17 MBps) [2024-11-19T11:50:15.236Z] Copying: 685/1024 [MB] (19 MBps) [2024-11-19T11:50:16.625Z] Copying: 699/1024 [MB] (14 MBps) [2024-11-19T11:50:17.569Z] Copying: 710/1024 [MB] (10 
MBps) [2024-11-19T11:50:18.505Z] Copying: 721/1024 [MB] (11 MBps) [2024-11-19T11:50:19.440Z] Copying: 733/1024 [MB] (11 MBps) [2024-11-19T11:50:20.376Z] Copying: 746/1024 [MB] (13 MBps) [2024-11-19T11:50:21.315Z] Copying: 760/1024 [MB] (13 MBps) [2024-11-19T11:50:22.258Z] Copying: 772/1024 [MB] (11 MBps) [2024-11-19T11:50:23.642Z] Copying: 783/1024 [MB] (10 MBps) [2024-11-19T11:50:24.579Z] Copying: 794/1024 [MB] (11 MBps) [2024-11-19T11:50:25.568Z] Copying: 805/1024 [MB] (11 MBps) [2024-11-19T11:50:26.532Z] Copying: 816/1024 [MB] (11 MBps) [2024-11-19T11:50:27.465Z] Copying: 826/1024 [MB] (10 MBps) [2024-11-19T11:50:28.400Z] Copying: 839/1024 [MB] (12 MBps) [2024-11-19T11:50:29.336Z] Copying: 852/1024 [MB] (13 MBps) [2024-11-19T11:50:30.278Z] Copying: 864/1024 [MB] (12 MBps) [2024-11-19T11:50:31.660Z] Copying: 875/1024 [MB] (11 MBps) [2024-11-19T11:50:32.230Z] Copying: 886/1024 [MB] (11 MBps) [2024-11-19T11:50:33.615Z] Copying: 905/1024 [MB] (18 MBps) [2024-11-19T11:50:34.559Z] Copying: 925/1024 [MB] (19 MBps) [2024-11-19T11:50:35.501Z] Copying: 940/1024 [MB] (15 MBps) [2024-11-19T11:50:36.444Z] Copying: 959/1024 [MB] (18 MBps) [2024-11-19T11:50:37.387Z] Copying: 975/1024 [MB] (16 MBps) [2024-11-19T11:50:38.329Z] Copying: 990/1024 [MB] (14 MBps) [2024-11-19T11:50:39.272Z] Copying: 1005/1024 [MB] (14 MBps) [2024-11-19T11:50:40.218Z] Copying: 1015/1024 [MB] (10 MBps) [2024-11-19T11:50:40.218Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-19 11:50:40.061004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.806 [2024-11-19 11:50:40.061258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:26.806 [2024-11-19 11:50:40.061508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:26.806 [2024-11-19 11:50:40.061559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.806 [2024-11-19 11:50:40.061642] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:26.806 [2024-11-19 11:50:40.062699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.806 [2024-11-19 11:50:40.062873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:26.806 [2024-11-19 11:50:40.062942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:19:26.806 [2024-11-19 11:50:40.062969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.806 [2024-11-19 11:50:40.066091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.806 [2024-11-19 11:50:40.066256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:26.806 [2024-11-19 11:50:40.066507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.074 ms 00:19:26.806 [2024-11-19 11:50:40.066553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.806 [2024-11-19 11:50:40.087006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.806 [2024-11-19 11:50:40.087206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:26.806 [2024-11-19 11:50:40.087285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.397 ms 00:19:26.806 [2024-11-19 11:50:40.087309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.806 [2024-11-19 11:50:40.093715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.806 [2024-11-19 11:50:40.093910] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:26.806 [2024-11-19 11:50:40.093984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.355 ms 00:19:26.806 [2024-11-19 11:50:40.094010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.806 [2024-11-19 11:50:40.097210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.806 [2024-11-19 11:50:40.097382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:26.806 [2024-11-19 11:50:40.097463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:19:26.806 [2024-11-19 11:50:40.097488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.806 [2024-11-19 11:50:40.103862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.806 [2024-11-19 11:50:40.104301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:26.806 [2024-11-19 11:50:40.104552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.311 ms 00:19:26.806 [2024-11-19 11:50:40.104633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.806 [2024-11-19 11:50:40.105480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.806 [2024-11-19 11:50:40.105836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:26.806 [2024-11-19 11:50:40.106013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:19:26.806 [2024-11-19 11:50:40.106228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.806 [2024-11-19 11:50:40.110319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.806 [2024-11-19 11:50:40.110495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:26.806 [2024-11-19 11:50:40.110554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.977 ms 00:19:26.806 [2024-11-19 11:50:40.110577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.806 [2024-11-19 11:50:40.113641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.806 [2024-11-19 11:50:40.113785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:26.806 [2024-11-19 11:50:40.113839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:19:26.806 [2024-11-19 11:50:40.113861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.806 [2024-11-19 11:50:40.116370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.806 [2024-11-19 11:50:40.116559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:26.806 [2024-11-19 11:50:40.116577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.387 ms 00:19:26.806 [2024-11-19 11:50:40.116586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.806 [2024-11-19 11:50:40.118787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.806 [2024-11-19 11:50:40.118836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:26.806 [2024-11-19 11:50:40.118846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:19:26.806 [2024-11-19 11:50:40.118853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.806 [2024-11-19 11:50:40.118917] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:19:26.806 [2024-11-19 11:50:40.118935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.118955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.118964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.118973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.118981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.118989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.118998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.119006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.119014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.119022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.119031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.119040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.119047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.119055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.119062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.119071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.119079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:26.806 [2024-11-19 11:50:40.119087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119345] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119574] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:26.807 [2024-11-19 11:50:40.119723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:26.808 [2024-11-19 11:50:40.119731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:26.808 [2024-11-19 11:50:40.119740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:26.808 [2024-11-19 11:50:40.119747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:26.808 [2024-11-19 11:50:40.119756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:26.808 [2024-11-19 11:50:40.119764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:26.808 [2024-11-19 11:50:40.119772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:26.808 [2024-11-19 
11:50:40.119780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:26.808 [2024-11-19 11:50:40.119797] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:26.808 [2024-11-19 11:50:40.119807] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 103bcb84-4551-4691-830d-e5d80ae80c8c 00:19:26.808 [2024-11-19 11:50:40.119824] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:26.808 [2024-11-19 11:50:40.119832] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:26.808 [2024-11-19 11:50:40.119840] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:26.808 [2024-11-19 11:50:40.119849] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:26.808 [2024-11-19 11:50:40.119857] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:26.808 [2024-11-19 11:50:40.119866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:26.808 [2024-11-19 11:50:40.119878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:26.808 [2024-11-19 11:50:40.119885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:26.808 [2024-11-19 11:50:40.119892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:26.808 [2024-11-19 11:50:40.119900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.808 [2024-11-19 11:50:40.119909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:26.808 [2024-11-19 11:50:40.119923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:19:26.808 [2024-11-19 11:50:40.119941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.122237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.808 [2024-11-19 11:50:40.122277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:26.808 [2024-11-19 11:50:40.122289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.276 ms 00:19:26.808 [2024-11-19 11:50:40.122298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.122453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.808 [2024-11-19 11:50:40.122469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:26.808 [2024-11-19 11:50:40.122478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:19:26.808 [2024-11-19 11:50:40.122485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.129456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.808 [2024-11-19 11:50:40.129503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:26.808 [2024-11-19 11:50:40.129515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.808 [2024-11-19 11:50:40.129522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.129594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.808 [2024-11-19 11:50:40.129606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.808 [2024-11-19 11:50:40.129614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.808 
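A note on the stats dump above: the "WAF: inf" figure follows directly from "user writes: 0". Write amplification is conventionally the ratio of media writes to user (host) writes, and dividing the 960 recorded writes by zero user writes has no finite value, which the dump renders as "inf". A minimal sketch of that arithmetic, assuming the conventional definition (the exact expression used in ftl_debug.c is not shown in this log):

    # values taken from the stats dump above
    total_writes=960
    user_writes=0
    awk -v t="$total_writes" -v u="$user_writes" \
        'BEGIN { if (u == 0) print "WAF: inf"; else printf "WAF: %.2f\n", t / u }'
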
[2024-11-19 11:50:40.129623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.129686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.808 [2024-11-19 11:50:40.129697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.808 [2024-11-19 11:50:40.129705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.808 [2024-11-19 11:50:40.129713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.129730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.808 [2024-11-19 11:50:40.129739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.808 [2024-11-19 11:50:40.129749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.808 [2024-11-19 11:50:40.129757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.143460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.808 [2024-11-19 11:50:40.143511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:26.808 [2024-11-19 11:50:40.143522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.808 [2024-11-19 11:50:40.143530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.154045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.808 [2024-11-19 11:50:40.154098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:26.808 [2024-11-19 11:50:40.154116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.808 [2024-11-19 11:50:40.154125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.154176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.808 [2024-11-19 11:50:40.154185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:26.808 [2024-11-19 11:50:40.154194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.808 [2024-11-19 11:50:40.154203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.154241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.808 [2024-11-19 11:50:40.154250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:26.808 [2024-11-19 11:50:40.154259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.808 [2024-11-19 11:50:40.154267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.154340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.808 [2024-11-19 11:50:40.154351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:26.808 [2024-11-19 11:50:40.154359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.808 [2024-11-19 11:50:40.154367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.154401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.808 [2024-11-19 11:50:40.154429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:26.808 [2024-11-19 11:50:40.154438] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.808 [2024-11-19 11:50:40.154446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.154499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.808 [2024-11-19 11:50:40.154515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:26.808 [2024-11-19 11:50:40.154524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.808 [2024-11-19 11:50:40.154536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.154588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:26.808 [2024-11-19 11:50:40.154598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:26.808 [2024-11-19 11:50:40.154607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:26.808 [2024-11-19 11:50:40.154614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.808 [2024-11-19 11:50:40.154758] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 93.720 ms, result 0 00:19:27.379 00:19:27.379 00:19:27.379 11:50:40 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:27.379 [2024-11-19 11:50:40.670098] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:27.379 [2024-11-19 11:50:40.670244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86900 ] 00:19:27.641 [2024-11-19 11:50:40.808594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.641 [2024-11-19 11:50:40.861753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.641 [2024-11-19 11:50:40.976771] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:27.641 [2024-11-19 11:50:40.976855] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:27.903 [2024-11-19 11:50:41.137113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.903 [2024-11-19 11:50:41.137173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:27.903 [2024-11-19 11:50:41.137191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:27.903 [2024-11-19 11:50:41.137200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.903 [2024-11-19 11:50:41.137260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.903 [2024-11-19 11:50:41.137276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.903 [2024-11-19 11:50:41.137285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:27.903 [2024-11-19 11:50:41.137302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.903 [2024-11-19 11:50:41.137323] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:27.904 [2024-11-19 11:50:41.137610] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:27.904 [2024-11-19 11:50:41.137630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.904 [2024-11-19 11:50:41.137638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.904 [2024-11-19 11:50:41.137650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:19:27.904 [2024-11-19 11:50:41.137659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.904 [2024-11-19 11:50:41.139709] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:27.904 [2024-11-19 11:50:41.143423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.904 [2024-11-19 11:50:41.143473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:27.904 [2024-11-19 11:50:41.143494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms 00:19:27.904 [2024-11-19 11:50:41.143506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.904 [2024-11-19 11:50:41.143582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.904 [2024-11-19 11:50:41.143595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:27.904 [2024-11-19 11:50:41.143607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:27.904 [2024-11-19 11:50:41.143615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.904 [2024-11-19 11:50:41.151639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.904 [2024-11-19 11:50:41.151682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.904 [2024-11-19 11:50:41.151693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.979 ms 00:19:27.904 [2024-11-19 11:50:41.151701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.904 [2024-11-19 11:50:41.151809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.904 [2024-11-19 11:50:41.151820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.904 [2024-11-19 11:50:41.151829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:27.904 [2024-11-19 11:50:41.151837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.904 [2024-11-19 11:50:41.151897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.904 [2024-11-19 11:50:41.151914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:27.904 [2024-11-19 11:50:41.151922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:27.904 [2024-11-19 11:50:41.151933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.904 [2024-11-19 11:50:41.151957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:27.904 [2024-11-19 11:50:41.154036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.904 [2024-11-19 11:50:41.154070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.904 [2024-11-19 11:50:41.154087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.086 ms 00:19:27.904 [2024-11-19 11:50:41.154094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.904 [2024-11-19 11:50:41.154131] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.904 [2024-11-19 11:50:41.154140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:27.904 [2024-11-19 11:50:41.154149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:27.904 [2024-11-19 11:50:41.154156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.904 [2024-11-19 11:50:41.154177] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:27.904 [2024-11-19 11:50:41.154204] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:27.904 [2024-11-19 11:50:41.154249] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:27.904 [2024-11-19 11:50:41.154266] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:27.904 [2024-11-19 11:50:41.154371] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:27.904 [2024-11-19 11:50:41.154382] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:27.904 [2024-11-19 11:50:41.154394] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:27.904 [2024-11-19 11:50:41.154404] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:27.904 [2024-11-19 11:50:41.154432] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:27.904 [2024-11-19 11:50:41.154441] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:27.904 [2024-11-19 11:50:41.154449] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:27.904 [2024-11-19 11:50:41.154458] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:27.904 [2024-11-19 11:50:41.154469] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:27.904 [2024-11-19 11:50:41.154477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.904 [2024-11-19 11:50:41.154485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:27.904 [2024-11-19 11:50:41.154493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:19:27.904 [2024-11-19 11:50:41.154500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.904 [2024-11-19 11:50:41.154585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.904 [2024-11-19 11:50:41.154606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:27.904 [2024-11-19 11:50:41.154620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:27.904 [2024-11-19 11:50:41.154628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.904 [2024-11-19 11:50:41.154725] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:27.904 [2024-11-19 11:50:41.154743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:27.904 [2024-11-19 11:50:41.154752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.904 [2024-11-19 11:50:41.154761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:27.904 [2024-11-19 11:50:41.154770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:27.904 [2024-11-19 11:50:41.154778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:27.904 [2024-11-19 11:50:41.154785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:27.904 [2024-11-19 11:50:41.154793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:27.904 [2024-11-19 11:50:41.154802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:27.904 [2024-11-19 11:50:41.154809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.904 [2024-11-19 11:50:41.154817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:27.904 [2024-11-19 11:50:41.154824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:27.904 [2024-11-19 11:50:41.154835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.904 [2024-11-19 11:50:41.154844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:27.904 [2024-11-19 11:50:41.154851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:27.904 [2024-11-19 11:50:41.154861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.904 [2024-11-19 11:50:41.154869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:27.904 [2024-11-19 11:50:41.154877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:27.904 [2024-11-19 11:50:41.154885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.904 [2024-11-19 11:50:41.154893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:27.904 [2024-11-19 11:50:41.154901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:27.904 [2024-11-19 11:50:41.154908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.904 [2024-11-19 11:50:41.154916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:27.904 [2024-11-19 11:50:41.154923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:27.904 [2024-11-19 11:50:41.154930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.904 [2024-11-19 11:50:41.154939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:27.904 [2024-11-19 11:50:41.154946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:27.904 [2024-11-19 11:50:41.154954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.904 [2024-11-19 11:50:41.154967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:27.904 [2024-11-19 11:50:41.154975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:27.904 [2024-11-19 11:50:41.154982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.904 [2024-11-19 11:50:41.154990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:27.904 [2024-11-19 11:50:41.154997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:27.904 [2024-11-19 11:50:41.155005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.905 [2024-11-19 11:50:41.155012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:27.905 [2024-11-19 11:50:41.155020] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:27.905 [2024-11-19 11:50:41.155027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.905 [2024-11-19 11:50:41.155034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:27.905 [2024-11-19 11:50:41.155042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:27.905 [2024-11-19 11:50:41.155049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.905 [2024-11-19 11:50:41.155057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:27.905 [2024-11-19 11:50:41.155065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:27.905 [2024-11-19 11:50:41.155072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.905 [2024-11-19 11:50:41.155079] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:27.905 [2024-11-19 11:50:41.155090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:27.905 [2024-11-19 11:50:41.155098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.905 [2024-11-19 11:50:41.155108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.905 [2024-11-19 11:50:41.155118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:27.905 [2024-11-19 11:50:41.155127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:27.905 [2024-11-19 11:50:41.155135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:27.905 [2024-11-19 11:50:41.155143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:27.905 [2024-11-19 11:50:41.155150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:27.905 [2024-11-19 11:50:41.155158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:27.905 [2024-11-19 11:50:41.155168] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:27.905 [2024-11-19 11:50:41.155179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.905 [2024-11-19 11:50:41.155188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:27.905 [2024-11-19 11:50:41.155196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:27.905 [2024-11-19 11:50:41.155205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:27.905 [2024-11-19 11:50:41.155213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:27.905 [2024-11-19 11:50:41.155221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:27.905 [2024-11-19 11:50:41.155230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:27.905 [2024-11-19 11:50:41.155238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:27.905 [2024-11-19 11:50:41.155245] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:27.905 [2024-11-19 11:50:41.155253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:27.905 [2024-11-19 11:50:41.155267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:27.905 [2024-11-19 11:50:41.155274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:27.905 [2024-11-19 11:50:41.155281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:27.905 [2024-11-19 11:50:41.155289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:27.905 [2024-11-19 11:50:41.155297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:27.905 [2024-11-19 11:50:41.155304] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:27.905 [2024-11-19 11:50:41.155313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.905 [2024-11-19 11:50:41.155321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:27.905 [2024-11-19 11:50:41.155330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:27.905 [2024-11-19 11:50:41.155337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:27.905 [2024-11-19 11:50:41.155344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:27.905 [2024-11-19 11:50:41.155351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.905 [2024-11-19 11:50:41.155362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:27.905 [2024-11-19 11:50:41.155369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:19:27.905 [2024-11-19 11:50:41.155377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.905 [2024-11-19 11:50:41.177680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.905 [2024-11-19 11:50:41.177748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.905 [2024-11-19 11:50:41.177770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.206 ms 00:19:27.905 [2024-11-19 11:50:41.177782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.905 [2024-11-19 11:50:41.177900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.905 [2024-11-19 11:50:41.177914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:27.905 [2024-11-19 11:50:41.177933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:27.905 [2024-11-19 11:50:41.177942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:27.905 [2024-11-19 11:50:41.189775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.905 [2024-11-19 11:50:41.189820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.905 [2024-11-19 11:50:41.189831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.758 ms 00:19:27.905 [2024-11-19 11:50:41.189840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.905 [2024-11-19 11:50:41.189874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.905 [2024-11-19 11:50:41.189883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.905 [2024-11-19 11:50:41.189891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:27.905 [2024-11-19 11:50:41.189899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.905 [2024-11-19 11:50:41.190494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.905 [2024-11-19 11:50:41.190536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.905 [2024-11-19 11:50:41.190551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:19:27.905 [2024-11-19 11:50:41.190560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.905 [2024-11-19 11:50:41.190712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.905 [2024-11-19 11:50:41.190723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.905 [2024-11-19 11:50:41.190733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:27.905 [2024-11-19 11:50:41.190745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.905 [2024-11-19 11:50:41.197649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.905 [2024-11-19 11:50:41.197699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.905 [2024-11-19 11:50:41.197712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.874 ms 00:19:27.905 [2024-11-19 11:50:41.197720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.905 [2024-11-19 11:50:41.201590] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:27.905 [2024-11-19 11:50:41.201643] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:27.905 [2024-11-19 11:50:41.201655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.905 [2024-11-19 11:50:41.201664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:27.905 [2024-11-19 11:50:41.201673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.841 ms 00:19:27.905 [2024-11-19 11:50:41.201680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.905 [2024-11-19 11:50:41.217593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.905 [2024-11-19 11:50:41.217651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:27.905 [2024-11-19 11:50:41.217665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.861 ms 00:19:27.905 [2024-11-19 11:50:41.217673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.905 [2024-11-19 11:50:41.220598] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.905 [2024-11-19 11:50:41.220638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:27.906 [2024-11-19 11:50:41.220648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.872 ms 00:19:27.906 [2024-11-19 11:50:41.220655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.906 [2024-11-19 11:50:41.223254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.906 [2024-11-19 11:50:41.223298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:27.906 [2024-11-19 11:50:41.223307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.554 ms 00:19:27.906 [2024-11-19 11:50:41.223314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.906 [2024-11-19 11:50:41.223676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.906 [2024-11-19 11:50:41.223705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:27.906 [2024-11-19 11:50:41.223715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:19:27.906 [2024-11-19 11:50:41.223723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.906 [2024-11-19 11:50:41.247341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.906 [2024-11-19 11:50:41.247420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:27.906 [2024-11-19 11:50:41.247434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.600 ms 00:19:27.906 [2024-11-19 11:50:41.247443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.906 [2024-11-19 11:50:41.255562] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:27.906 [2024-11-19 11:50:41.258601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.906 [2024-11-19 11:50:41.258646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:27.906 [2024-11-19 11:50:41.258665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.102 ms 00:19:27.906 [2024-11-19 11:50:41.258673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.906 [2024-11-19 11:50:41.258750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.906 [2024-11-19 11:50:41.258761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:27.906 [2024-11-19 11:50:41.258774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:27.906 [2024-11-19 11:50:41.258782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.906 [2024-11-19 11:50:41.258854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.906 [2024-11-19 11:50:41.258864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:27.906 [2024-11-19 11:50:41.258873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:27.906 [2024-11-19 11:50:41.258883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.906 [2024-11-19 11:50:41.258914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.906 [2024-11-19 11:50:41.258923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:27.906 [2024-11-19 11:50:41.258931] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:27.906 [2024-11-19 11:50:41.258942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.906 [2024-11-19 11:50:41.258980] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:27.906 [2024-11-19 11:50:41.258994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.906 [2024-11-19 11:50:41.259002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:27.906 [2024-11-19 11:50:41.259010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:27.906 [2024-11-19 11:50:41.259017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.906 [2024-11-19 11:50:41.264517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.906 [2024-11-19 11:50:41.264683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:27.906 [2024-11-19 11:50:41.264694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.479 ms 00:19:27.906 [2024-11-19 11:50:41.264703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.906 [2024-11-19 11:50:41.264791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.906 [2024-11-19 11:50:41.264802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:27.906 [2024-11-19 11:50:41.264811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:27.906 [2024-11-19 11:50:41.264819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.906 [2024-11-19 11:50:41.266266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.673 ms, result 0 00:19:29.292  [2024-11-19T11:50:43.648Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-19T11:50:44.592Z] Copying: 36/1024 [MB] (16 MBps) [2024-11-19T11:50:45.535Z] Copying: 50/1024 [MB] (13 MBps) [2024-11-19T11:50:46.476Z] Copying: 61/1024 [MB] (11 MBps) [2024-11-19T11:50:47.863Z] Copying: 72/1024 [MB] (10 MBps) [2024-11-19T11:50:48.805Z] Copying: 82/1024 [MB] (10 MBps) [2024-11-19T11:50:49.748Z] Copying: 93/1024 [MB] (10 MBps) [2024-11-19T11:50:50.692Z] Copying: 103/1024 [MB] (10 MBps) [2024-11-19T11:50:51.637Z] Copying: 114/1024 [MB] (10 MBps) [2024-11-19T11:50:52.581Z] Copying: 125/1024 [MB] (10 MBps) [2024-11-19T11:50:53.524Z] Copying: 136/1024 [MB] (10 MBps) [2024-11-19T11:50:54.469Z] Copying: 146/1024 [MB] (10 MBps) [2024-11-19T11:50:55.854Z] Copying: 157/1024 [MB] (10 MBps) [2024-11-19T11:50:56.828Z] Copying: 167/1024 [MB] (10 MBps) [2024-11-19T11:50:57.792Z] Copying: 178/1024 [MB] (10 MBps) [2024-11-19T11:50:58.735Z] Copying: 188/1024 [MB] (10 MBps) [2024-11-19T11:50:59.678Z] Copying: 207/1024 [MB] (19 MBps) [2024-11-19T11:51:00.622Z] Copying: 222/1024 [MB] (14 MBps) [2024-11-19T11:51:01.566Z] Copying: 238/1024 [MB] (15 MBps) [2024-11-19T11:51:02.508Z] Copying: 255/1024 [MB] (17 MBps) [2024-11-19T11:51:03.453Z] Copying: 267/1024 [MB] (11 MBps) [2024-11-19T11:51:04.843Z] Copying: 280/1024 [MB] (12 MBps) [2024-11-19T11:51:05.788Z] Copying: 296/1024 [MB] (16 MBps) [2024-11-19T11:51:06.737Z] Copying: 308/1024 [MB] (12 MBps) [2024-11-19T11:51:07.681Z] Copying: 320/1024 [MB] (11 MBps) [2024-11-19T11:51:08.626Z] Copying: 331/1024 [MB] (11 MBps) [2024-11-19T11:51:09.570Z] Copying: 342/1024 [MB] (11 MBps) [2024-11-19T11:51:10.512Z] Copying: 357/1024 [MB] (14 MBps) [2024-11-19T11:51:11.455Z] Copying: 
372/1024 [MB] (15 MBps) [2024-11-19T11:51:12.839Z] Copying: 388/1024 [MB] (15 MBps) [2024-11-19T11:51:13.782Z] Copying: 407/1024 [MB] (19 MBps) [2024-11-19T11:51:14.725Z] Copying: 429/1024 [MB] (21 MBps) [2024-11-19T11:51:15.670Z] Copying: 448/1024 [MB] (19 MBps) [2024-11-19T11:51:16.615Z] Copying: 465/1024 [MB] (16 MBps) [2024-11-19T11:51:17.560Z] Copying: 477/1024 [MB] (12 MBps) [2024-11-19T11:51:18.507Z] Copying: 501/1024 [MB] (23 MBps) [2024-11-19T11:51:19.452Z] Copying: 518/1024 [MB] (17 MBps) [2024-11-19T11:51:20.840Z] Copying: 534/1024 [MB] (16 MBps) [2024-11-19T11:51:21.782Z] Copying: 551/1024 [MB] (16 MBps) [2024-11-19T11:51:22.728Z] Copying: 569/1024 [MB] (17 MBps) [2024-11-19T11:51:23.673Z] Copying: 583/1024 [MB] (14 MBps) [2024-11-19T11:51:24.618Z] Copying: 597/1024 [MB] (13 MBps) [2024-11-19T11:51:25.561Z] Copying: 612/1024 [MB] (15 MBps) [2024-11-19T11:51:26.505Z] Copying: 622/1024 [MB] (10 MBps) [2024-11-19T11:51:27.450Z] Copying: 633/1024 [MB] (10 MBps) [2024-11-19T11:51:28.896Z] Copying: 644/1024 [MB] (10 MBps) [2024-11-19T11:51:29.469Z] Copying: 655/1024 [MB] (11 MBps) [2024-11-19T11:51:30.856Z] Copying: 666/1024 [MB] (11 MBps) [2024-11-19T11:51:31.799Z] Copying: 677/1024 [MB] (10 MBps) [2024-11-19T11:51:32.745Z] Copying: 688/1024 [MB] (10 MBps) [2024-11-19T11:51:33.689Z] Copying: 699/1024 [MB] (10 MBps) [2024-11-19T11:51:34.633Z] Copying: 709/1024 [MB] (10 MBps) [2024-11-19T11:51:35.577Z] Copying: 720/1024 [MB] (10 MBps) [2024-11-19T11:51:36.520Z] Copying: 731/1024 [MB] (10 MBps) [2024-11-19T11:51:37.465Z] Copying: 741/1024 [MB] (10 MBps) [2024-11-19T11:51:38.854Z] Copying: 760/1024 [MB] (18 MBps) [2024-11-19T11:51:39.795Z] Copying: 771/1024 [MB] (10 MBps) [2024-11-19T11:51:40.738Z] Copying: 782/1024 [MB] (10 MBps) [2024-11-19T11:51:41.682Z] Copying: 797/1024 [MB] (14 MBps) [2024-11-19T11:51:42.626Z] Copying: 813/1024 [MB] (15 MBps) [2024-11-19T11:51:43.570Z] Copying: 828/1024 [MB] (15 MBps) [2024-11-19T11:51:44.514Z] Copying: 841/1024 [MB] (12 MBps) [2024-11-19T11:51:45.459Z] Copying: 852/1024 [MB] (11 MBps) [2024-11-19T11:51:46.845Z] Copying: 863/1024 [MB] (10 MBps) [2024-11-19T11:51:47.789Z] Copying: 874/1024 [MB] (10 MBps) [2024-11-19T11:51:48.734Z] Copying: 893/1024 [MB] (19 MBps) [2024-11-19T11:51:49.677Z] Copying: 905/1024 [MB] (11 MBps) [2024-11-19T11:51:50.618Z] Copying: 917/1024 [MB] (11 MBps) [2024-11-19T11:51:51.562Z] Copying: 928/1024 [MB] (11 MBps) [2024-11-19T11:51:52.506Z] Copying: 938/1024 [MB] (10 MBps) [2024-11-19T11:51:53.448Z] Copying: 949/1024 [MB] (10 MBps) [2024-11-19T11:51:54.834Z] Copying: 961/1024 [MB] (11 MBps) [2024-11-19T11:51:55.776Z] Copying: 978/1024 [MB] (16 MBps) [2024-11-19T11:51:56.719Z] Copying: 989/1024 [MB] (11 MBps) [2024-11-19T11:51:57.663Z] Copying: 1001/1024 [MB] (11 MBps) [2024-11-19T11:51:58.607Z] Copying: 1012/1024 [MB] (11 MBps) [2024-11-19T11:51:58.607Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-19T11:51:58.870Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-19 11:51:58.649517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.458 [2024-11-19 11:51:58.649596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:45.458 [2024-11-19 11:51:58.649614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:45.458 [2024-11-19 11:51:58.649624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.458 [2024-11-19 11:51:58.649655] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on 
app_thread 00:20:45.458 [2024-11-19 11:51:58.650458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.458 [2024-11-19 11:51:58.650484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:45.458 [2024-11-19 11:51:58.650495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:20:45.458 [2024-11-19 11:51:58.650505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.458 [2024-11-19 11:51:58.650748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.458 [2024-11-19 11:51:58.650759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:45.458 [2024-11-19 11:51:58.650768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:20:45.458 [2024-11-19 11:51:58.650777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.458 [2024-11-19 11:51:58.654764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.458 [2024-11-19 11:51:58.654809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:45.458 [2024-11-19 11:51:58.654822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.971 ms 00:20:45.458 [2024-11-19 11:51:58.654832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.458 [2024-11-19 11:51:58.664772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.458 [2024-11-19 11:51:58.664823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:45.458 [2024-11-19 11:51:58.664837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.915 ms 00:20:45.458 [2024-11-19 11:51:58.664848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.458 [2024-11-19 11:51:58.667805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.458 [2024-11-19 11:51:58.667855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:45.458 [2024-11-19 11:51:58.667866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:20:45.458 [2024-11-19 11:51:58.667874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.458 [2024-11-19 11:51:58.673265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.458 [2024-11-19 11:51:58.673320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:45.458 [2024-11-19 11:51:58.673343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.345 ms 00:20:45.458 [2024-11-19 11:51:58.673355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.458 [2024-11-19 11:51:58.673502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.458 [2024-11-19 11:51:58.673514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:45.458 [2024-11-19 11:51:58.673524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:45.458 [2024-11-19 11:51:58.673536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.458 [2024-11-19 11:51:58.676946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.458 [2024-11-19 11:51:58.677003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:45.458 [2024-11-19 11:51:58.677013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.393 ms 00:20:45.458 [2024-11-19 11:51:58.677021] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.458 [2024-11-19 11:51:58.679908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.458 [2024-11-19 11:51:58.679953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:45.458 [2024-11-19 11:51:58.679963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.845 ms 00:20:45.458 [2024-11-19 11:51:58.679970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.458 [2024-11-19 11:51:58.682565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.458 [2024-11-19 11:51:58.682611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:45.458 [2024-11-19 11:51:58.682621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.556 ms 00:20:45.458 [2024-11-19 11:51:58.682628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.458 [2024-11-19 11:51:58.684992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.458 [2024-11-19 11:51:58.685042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:45.458 [2024-11-19 11:51:58.685052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.296 ms 00:20:45.458 [2024-11-19 11:51:58.685059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.458 [2024-11-19 11:51:58.685098] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:45.458 [2024-11-19 11:51:58.685121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 
11:51:58.685235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:45.458 [2024-11-19 11:51:58.685311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:20:45.459 [2024-11-19 11:51:58.685446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:45.459 [2024-11-19 11:51:58.685949] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:45.459 [2024-11-19 11:51:58.685958] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 103bcb84-4551-4691-830d-e5d80ae80c8c 00:20:45.459 [2024-11-19 11:51:58.685967] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:45.459 [2024-11-19 11:51:58.685975] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:45.459 [2024-11-19 11:51:58.685982] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:45.459 [2024-11-19 11:51:58.685991] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:45.459 [2024-11-19 11:51:58.685997] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:45.459 [2024-11-19 11:51:58.686010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:45.459 [2024-11-19 11:51:58.686018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:45.459 [2024-11-19 11:51:58.686025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:45.459 [2024-11-19 11:51:58.686032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:45.459 [2024-11-19 11:51:58.686039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.459 [2024-11-19 11:51:58.686048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:45.459 [2024-11-19 11:51:58.686065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:20:45.459 [2024-11-19 11:51:58.686073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.459 [2024-11-19 11:51:58.688343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.459 [2024-11-19 11:51:58.688380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
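[Editor's note on the statistics dump above: with "total writes: 960" and "user writes: 0", the write-amplification factor (WAF = total media writes / user writes) is undefined, which the log renders as "WAF: inf". A minimal C sketch of that calculation follows; the struct and field names are invented for illustration and this is not SPDK's actual ftl_debug.c code.]

    #include <stdio.h>

    /* Hypothetical counters mirroring the values in the stats dump above. */
    struct ftl_stats {
        unsigned long long total_writes; /* all media writes, incl. metadata/GC */
        unsigned long long user_writes;  /* writes issued by the user/host      */
    };

    /* WAF = total media writes / user writes; undefined (printed "inf")
     * when no user writes have happened yet. */
    static void dump_waf(const struct ftl_stats *s)
    {
        if (s->user_writes == 0)
            printf("WAF: inf\n");
        else
            printf("WAF: %.4f\n",
                   (double)s->total_writes / (double)s->user_writes);
    }

    int main(void)
    {
        struct ftl_stats s = { .total_writes = 960, .user_writes = 0 };
        dump_waf(&s); /* prints "WAF: inf", matching the log above */
        return 0;
    }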
00:20:45.459 [2024-11-19 11:51:58.688391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.238 ms 00:20:45.460 [2024-11-19 11:51:58.688401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.688544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.460 [2024-11-19 11:51:58.688589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:45.460 [2024-11-19 11:51:58.688599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:45.460 [2024-11-19 11:51:58.688607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.695271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.460 [2024-11-19 11:51:58.695324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:45.460 [2024-11-19 11:51:58.695334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.460 [2024-11-19 11:51:58.695342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.695439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.460 [2024-11-19 11:51:58.695452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:45.460 [2024-11-19 11:51:58.695461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.460 [2024-11-19 11:51:58.695469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.695529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.460 [2024-11-19 11:51:58.695540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:45.460 [2024-11-19 11:51:58.695547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.460 [2024-11-19 11:51:58.695555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.695573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.460 [2024-11-19 11:51:58.695581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:45.460 [2024-11-19 11:51:58.695592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.460 [2024-11-19 11:51:58.695603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.709209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.460 [2024-11-19 11:51:58.709274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:45.460 [2024-11-19 11:51:58.709286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.460 [2024-11-19 11:51:58.709295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.720447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.460 [2024-11-19 11:51:58.720512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.460 [2024-11-19 11:51:58.720527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.460 [2024-11-19 11:51:58.720535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.720598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.460 [2024-11-19 11:51:58.720612] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.460 [2024-11-19 11:51:58.720621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.460 [2024-11-19 11:51:58.720634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.720670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.460 [2024-11-19 11:51:58.720680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.460 [2024-11-19 11:51:58.720692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.460 [2024-11-19 11:51:58.720704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.720779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.460 [2024-11-19 11:51:58.720790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.460 [2024-11-19 11:51:58.720802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.460 [2024-11-19 11:51:58.720810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.720844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.460 [2024-11-19 11:51:58.720854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:45.460 [2024-11-19 11:51:58.720866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.460 [2024-11-19 11:51:58.720875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.720919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.460 [2024-11-19 11:51:58.720928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.460 [2024-11-19 11:51:58.720936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.460 [2024-11-19 11:51:58.720944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.720991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.460 [2024-11-19 11:51:58.721003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.460 [2024-11-19 11:51:58.721016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.460 [2024-11-19 11:51:58.721026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.460 [2024-11-19 11:51:58.721162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.612 ms, result 0 00:20:45.721 00:20:45.721 00:20:45.721 11:51:58 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:48.384 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:48.384 11:52:01 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:48.384 [2024-11-19 11:52:01.295099] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
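[Editor's note on the spdk_dd invocation above: after md5sum -c confirms the restored testfile, the file is written back to the ftl0 bdev with --seek=131072. Assuming --seek counts output I/O units the way dd(1)'s seek counts output blocks, and assuming a 4 KiB logical block size — neither is stated in this log — the write would start 512 MiB into the device. A small C sketch of that arithmetic:]

    #include <stdio.h>

    /* Sketch only: --seek is taken to skip this many I/O units on the
     * output device before writing. The 4096-byte unit below is an
     * assumption for illustration; the log does not state ftl0's
     * logical block size. */
    int main(void)
    {
        const unsigned long long seek_units = 131072; /* from --seek=131072 */
        const unsigned long long unit_bytes = 4096;   /* assumed block size */
        unsigned long long offset = seek_units * unit_bytes;

        printf("write starts at byte %llu (%llu MiB) into ftl0\n",
               offset, offset / (1024ULL * 1024ULL)); /* 512 MiB if 4 KiB units */
        return 0;
    }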
00:20:48.384 [2024-11-19 11:52:01.295263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87734 ] 00:20:48.384 [2024-11-19 11:52:01.432914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.384 [2024-11-19 11:52:01.482865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.384 [2024-11-19 11:52:01.597515] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:48.384 [2024-11-19 11:52:01.597608] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:48.384 [2024-11-19 11:52:01.757642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.384 [2024-11-19 11:52:01.757703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:48.384 [2024-11-19 11:52:01.757721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:48.384 [2024-11-19 11:52:01.757729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.384 [2024-11-19 11:52:01.757788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.384 [2024-11-19 11:52:01.757806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:48.384 [2024-11-19 11:52:01.757815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:48.384 [2024-11-19 11:52:01.757829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.384 [2024-11-19 11:52:01.757851] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:48.384 [2024-11-19 11:52:01.758109] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:48.384 [2024-11-19 11:52:01.758135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.384 [2024-11-19 11:52:01.758143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:48.384 [2024-11-19 11:52:01.758154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:20:48.384 [2024-11-19 11:52:01.758162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.384 [2024-11-19 11:52:01.759876] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:48.384 [2024-11-19 11:52:01.763842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.384 [2024-11-19 11:52:01.763895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:48.384 [2024-11-19 11:52:01.763906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.968 ms 00:20:48.384 [2024-11-19 11:52:01.763914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.384 [2024-11-19 11:52:01.763996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.384 [2024-11-19 11:52:01.764009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:48.384 [2024-11-19 11:52:01.764020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:48.384 [2024-11-19 11:52:01.764033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.384 [2024-11-19 11:52:01.772392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
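[Editor's note on the trace records above and below: every management step is logged as a quadruple — an Action (or Rollback) marker, the step name, its duration in ms, and a status code. A hypothetical C sketch of a step runner that times a callback and emits records of that shape; this is illustrative only, not the actual trace_step code in mngt/ftl_mngt.c.]

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical step runner producing the Action/name/duration/status
     * quadruple seen in the trace records. All names are invented for
     * illustration. */
    typedef int (*step_fn)(void);

    static int run_step(const char *name, step_fn fn)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;

        printf("[FTL][ftl0] Action\n");
        printf("[FTL][ftl0] name: %s\n", name);
        printf("[FTL][ftl0] duration: %.3f ms\n", ms);
        printf("[FTL][ftl0] status: %d\n", status);
        return status;
    }

    static int load_super_block(void) { return 0; } /* stand-in step body */

    int main(void)
    {
        return run_step("Load super block", load_super_block);
    }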
00:20:48.384 [2024-11-19 11:52:01.772454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:48.384 [2024-11-19 11:52:01.772465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.319 ms 00:20:48.384 [2024-11-19 11:52:01.772477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.384 [2024-11-19 11:52:01.772605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.384 [2024-11-19 11:52:01.772616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:48.384 [2024-11-19 11:52:01.772626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:48.384 [2024-11-19 11:52:01.772634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.384 [2024-11-19 11:52:01.772696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.384 [2024-11-19 11:52:01.772706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:48.384 [2024-11-19 11:52:01.772719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:48.384 [2024-11-19 11:52:01.772727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.384 [2024-11-19 11:52:01.772756] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:48.384 [2024-11-19 11:52:01.774924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.384 [2024-11-19 11:52:01.774966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:48.384 [2024-11-19 11:52:01.774976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.173 ms 00:20:48.384 [2024-11-19 11:52:01.774984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.384 [2024-11-19 11:52:01.775020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.384 [2024-11-19 11:52:01.775029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:48.384 [2024-11-19 11:52:01.775044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:48.384 [2024-11-19 11:52:01.775051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.384 [2024-11-19 11:52:01.775076] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:48.384 [2024-11-19 11:52:01.775100] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:48.384 [2024-11-19 11:52:01.775142] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:48.384 [2024-11-19 11:52:01.775159] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:48.384 [2024-11-19 11:52:01.775264] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:48.384 [2024-11-19 11:52:01.775275] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:48.384 [2024-11-19 11:52:01.775290] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:48.384 [2024-11-19 11:52:01.775303] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:48.384 [2024-11-19 11:52:01.775312] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:48.384 [2024-11-19 11:52:01.775321] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:48.384 [2024-11-19 11:52:01.775329] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:48.384 [2024-11-19 11:52:01.775338] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:48.384 [2024-11-19 11:52:01.775345] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:48.384 [2024-11-19 11:52:01.775353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.384 [2024-11-19 11:52:01.775360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:48.384 [2024-11-19 11:52:01.775369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:48.384 [2024-11-19 11:52:01.775380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.384 [2024-11-19 11:52:01.775507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.384 [2024-11-19 11:52:01.775526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:48.384 [2024-11-19 11:52:01.775533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:20:48.384 [2024-11-19 11:52:01.775540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.384 [2024-11-19 11:52:01.775638] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:48.384 [2024-11-19 11:52:01.775650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:48.384 [2024-11-19 11:52:01.775659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:48.384 [2024-11-19 11:52:01.775672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.384 [2024-11-19 11:52:01.775682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:48.384 [2024-11-19 11:52:01.775691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:48.384 [2024-11-19 11:52:01.775698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:48.384 [2024-11-19 11:52:01.775707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:48.384 [2024-11-19 11:52:01.775714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:48.384 [2024-11-19 11:52:01.775722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:48.384 [2024-11-19 11:52:01.775730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:48.384 [2024-11-19 11:52:01.775738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:48.384 [2024-11-19 11:52:01.775748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:48.384 [2024-11-19 11:52:01.775757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:48.384 [2024-11-19 11:52:01.775768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:48.384 [2024-11-19 11:52:01.775776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.384 [2024-11-19 11:52:01.775784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:48.384 [2024-11-19 11:52:01.775792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:48.384 [2024-11-19 11:52:01.775800] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.384 [2024-11-19 11:52:01.775808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:48.384 [2024-11-19 11:52:01.775816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:48.384 [2024-11-19 11:52:01.775823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:48.384 [2024-11-19 11:52:01.775831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:48.384 [2024-11-19 11:52:01.775839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:48.384 [2024-11-19 11:52:01.775846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:48.384 [2024-11-19 11:52:01.775854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:48.384 [2024-11-19 11:52:01.775862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:48.384 [2024-11-19 11:52:01.775869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:48.384 [2024-11-19 11:52:01.775881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:48.384 [2024-11-19 11:52:01.775889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:48.384 [2024-11-19 11:52:01.775897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:48.384 [2024-11-19 11:52:01.775905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:48.384 [2024-11-19 11:52:01.775913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:48.384 [2024-11-19 11:52:01.775920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:48.384 [2024-11-19 11:52:01.775928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:48.384 [2024-11-19 11:52:01.775936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:48.384 [2024-11-19 11:52:01.775943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:48.384 [2024-11-19 11:52:01.775951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:48.384 [2024-11-19 11:52:01.775959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:48.384 [2024-11-19 11:52:01.775966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.384 [2024-11-19 11:52:01.775973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:48.384 [2024-11-19 11:52:01.775981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:48.384 [2024-11-19 11:52:01.775988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.384 [2024-11-19 11:52:01.775996] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:48.384 [2024-11-19 11:52:01.776009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:48.384 [2024-11-19 11:52:01.776020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:48.384 [2024-11-19 11:52:01.776029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:48.384 [2024-11-19 11:52:01.776039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:48.384 [2024-11-19 11:52:01.776047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:48.384 [2024-11-19 11:52:01.776055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:48.384 
[2024-11-19 11:52:01.776063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:48.384 [2024-11-19 11:52:01.776069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:48.384 [2024-11-19 11:52:01.776078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:48.384 [2024-11-19 11:52:01.776086] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:48.384 [2024-11-19 11:52:01.776096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:48.384 [2024-11-19 11:52:01.776104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:48.384 [2024-11-19 11:52:01.776112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:48.384 [2024-11-19 11:52:01.776118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:48.384 [2024-11-19 11:52:01.776125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:48.384 [2024-11-19 11:52:01.776132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:48.384 [2024-11-19 11:52:01.776141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:48.384 [2024-11-19 11:52:01.776149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:48.384 [2024-11-19 11:52:01.776156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:48.384 [2024-11-19 11:52:01.776163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:48.385 [2024-11-19 11:52:01.776176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:48.385 [2024-11-19 11:52:01.776183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:48.385 [2024-11-19 11:52:01.776190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:48.385 [2024-11-19 11:52:01.776196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:48.385 [2024-11-19 11:52:01.776205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:48.385 [2024-11-19 11:52:01.776215] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:48.385 [2024-11-19 11:52:01.776223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:48.385 [2024-11-19 11:52:01.776231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:48.385 [2024-11-19 11:52:01.776238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:48.385 [2024-11-19 11:52:01.776245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:48.385 [2024-11-19 11:52:01.776253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:48.385 [2024-11-19 11:52:01.776261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.385 [2024-11-19 11:52:01.776270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:48.385 [2024-11-19 11:52:01.776278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:20:48.385 [2024-11-19 11:52:01.776286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.804108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.646 [2024-11-19 11:52:01.804195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.646 [2024-11-19 11:52:01.804223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.769 ms 00:20:48.646 [2024-11-19 11:52:01.804239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.804436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.646 [2024-11-19 11:52:01.804456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:48.646 [2024-11-19 11:52:01.804473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:20:48.646 [2024-11-19 11:52:01.804497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.816986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.646 [2024-11-19 11:52:01.817034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.646 [2024-11-19 11:52:01.817045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.363 ms 00:20:48.646 [2024-11-19 11:52:01.817054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.817091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.646 [2024-11-19 11:52:01.817100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.646 [2024-11-19 11:52:01.817109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:48.646 [2024-11-19 11:52:01.817117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.817728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.646 [2024-11-19 11:52:01.817762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.646 [2024-11-19 11:52:01.817775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:20:48.646 [2024-11-19 11:52:01.817788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.817941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.646 [2024-11-19 11:52:01.817952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.646 [2024-11-19 11:52:01.817964] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:20:48.646 [2024-11-19 11:52:01.817977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.825069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.646 [2024-11-19 11:52:01.825120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:48.646 [2024-11-19 11:52:01.825130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.066 ms 00:20:48.646 [2024-11-19 11:52:01.825138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.829064] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:48.646 [2024-11-19 11:52:01.829123] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:48.646 [2024-11-19 11:52:01.829135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.646 [2024-11-19 11:52:01.829144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:48.646 [2024-11-19 11:52:01.829153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.903 ms 00:20:48.646 [2024-11-19 11:52:01.829160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.845000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.646 [2024-11-19 11:52:01.845053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:48.646 [2024-11-19 11:52:01.845064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.786 ms 00:20:48.646 [2024-11-19 11:52:01.845072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.848047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.646 [2024-11-19 11:52:01.848091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:48.646 [2024-11-19 11:52:01.848101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.922 ms 00:20:48.646 [2024-11-19 11:52:01.848108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.850711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.646 [2024-11-19 11:52:01.850755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:48.646 [2024-11-19 11:52:01.850765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.558 ms 00:20:48.646 [2024-11-19 11:52:01.850772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.851120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.646 [2024-11-19 11:52:01.851134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:48.646 [2024-11-19 11:52:01.851144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:48.646 [2024-11-19 11:52:01.851151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.646 [2024-11-19 11:52:01.875906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.647 [2024-11-19 11:52:01.875963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:48.647 [2024-11-19 11:52:01.875975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.732 ms 00:20:48.647 [2024-11-19 11:52:01.875984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.647 [2024-11-19 11:52:01.884529] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:48.647 [2024-11-19 11:52:01.887453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.647 [2024-11-19 11:52:01.887498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:48.647 [2024-11-19 11:52:01.887514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.416 ms 00:20:48.647 [2024-11-19 11:52:01.887523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.647 [2024-11-19 11:52:01.887601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.647 [2024-11-19 11:52:01.887613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:48.647 [2024-11-19 11:52:01.887622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:48.647 [2024-11-19 11:52:01.887633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.647 [2024-11-19 11:52:01.887702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.647 [2024-11-19 11:52:01.887712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:48.647 [2024-11-19 11:52:01.887723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:48.647 [2024-11-19 11:52:01.887731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.647 [2024-11-19 11:52:01.887763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.647 [2024-11-19 11:52:01.887772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:48.647 [2024-11-19 11:52:01.887781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:48.647 [2024-11-19 11:52:01.887789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.647 [2024-11-19 11:52:01.887823] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:48.647 [2024-11-19 11:52:01.887837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.647 [2024-11-19 11:52:01.887848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:48.647 [2024-11-19 11:52:01.887856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:48.647 [2024-11-19 11:52:01.887866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.647 [2024-11-19 11:52:01.893166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.647 [2024-11-19 11:52:01.893214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:48.647 [2024-11-19 11:52:01.893224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.281 ms 00:20:48.647 [2024-11-19 11:52:01.893233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.647 [2024-11-19 11:52:01.893317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.647 [2024-11-19 11:52:01.893327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:48.647 [2024-11-19 11:52:01.893336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:48.647 [2024-11-19 11:52:01.893347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.647 
[2024-11-19 11:52:01.894698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.554 ms, result 0 00:20:49.588  [2024-11-19T11:52:03.940Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-19T11:52:05.327Z] Copying: 28/1024 [MB] (12 MBps) [2024-11-19T11:52:06.269Z] Copying: 47/1024 [MB] (19 MBps) [2024-11-19T11:52:07.215Z] Copying: 64/1024 [MB] (16 MBps) [2024-11-19T11:52:08.158Z] Copying: 74/1024 [MB] (10 MBps) [2024-11-19T11:52:09.102Z] Copying: 106/1024 [MB] (31 MBps) [2024-11-19T11:52:10.054Z] Copying: 147/1024 [MB] (41 MBps) [2024-11-19T11:52:10.996Z] Copying: 160/1024 [MB] (13 MBps) [2024-11-19T11:52:11.939Z] Copying: 181/1024 [MB] (20 MBps) [2024-11-19T11:52:13.325Z] Copying: 222/1024 [MB] (40 MBps) [2024-11-19T11:52:14.272Z] Copying: 239/1024 [MB] (17 MBps) [2024-11-19T11:52:15.218Z] Copying: 253/1024 [MB] (14 MBps) [2024-11-19T11:52:16.164Z] Copying: 267/1024 [MB] (14 MBps) [2024-11-19T11:52:17.132Z] Copying: 284/1024 [MB] (16 MBps) [2024-11-19T11:52:18.076Z] Copying: 296/1024 [MB] (11 MBps) [2024-11-19T11:52:19.023Z] Copying: 316/1024 [MB] (20 MBps) [2024-11-19T11:52:19.967Z] Copying: 333/1024 [MB] (16 MBps) [2024-11-19T11:52:20.913Z] Copying: 349/1024 [MB] (16 MBps) [2024-11-19T11:52:22.303Z] Copying: 368/1024 [MB] (19 MBps) [2024-11-19T11:52:23.248Z] Copying: 383/1024 [MB] (14 MBps) [2024-11-19T11:52:24.193Z] Copying: 393/1024 [MB] (10 MBps) [2024-11-19T11:52:25.137Z] Copying: 403/1024 [MB] (10 MBps) [2024-11-19T11:52:26.082Z] Copying: 414/1024 [MB] (10 MBps) [2024-11-19T11:52:27.027Z] Copying: 445/1024 [MB] (31 MBps) [2024-11-19T11:52:27.973Z] Copying: 466088/1048576 [kB] (9928 kBps) [2024-11-19T11:52:28.916Z] Copying: 465/1024 [MB] (10 MBps) [2024-11-19T11:52:30.303Z] Copying: 497/1024 [MB] (32 MBps) [2024-11-19T11:52:31.247Z] Copying: 542/1024 [MB] (44 MBps) [2024-11-19T11:52:32.193Z] Copying: 562/1024 [MB] (19 MBps) [2024-11-19T11:52:33.138Z] Copying: 580/1024 [MB] (18 MBps) [2024-11-19T11:52:34.084Z] Copying: 600/1024 [MB] (19 MBps) [2024-11-19T11:52:35.028Z] Copying: 613/1024 [MB] (13 MBps) [2024-11-19T11:52:35.970Z] Copying: 625/1024 [MB] (12 MBps) [2024-11-19T11:52:36.916Z] Copying: 635/1024 [MB] (10 MBps) [2024-11-19T11:52:38.305Z] Copying: 650/1024 [MB] (15 MBps) [2024-11-19T11:52:39.250Z] Copying: 662/1024 [MB] (12 MBps) [2024-11-19T11:52:40.193Z] Copying: 680/1024 [MB] (17 MBps) [2024-11-19T11:52:41.137Z] Copying: 692/1024 [MB] (12 MBps) [2024-11-19T11:52:42.083Z] Copying: 705/1024 [MB] (12 MBps) [2024-11-19T11:52:43.028Z] Copying: 718/1024 [MB] (13 MBps) [2024-11-19T11:52:43.975Z] Copying: 733/1024 [MB] (14 MBps) [2024-11-19T11:52:44.921Z] Copying: 754/1024 [MB] (21 MBps) [2024-11-19T11:52:46.311Z] Copying: 776/1024 [MB] (21 MBps) [2024-11-19T11:52:47.255Z] Copying: 793/1024 [MB] (16 MBps) [2024-11-19T11:52:48.201Z] Copying: 803/1024 [MB] (10 MBps) [2024-11-19T11:52:49.231Z] Copying: 813/1024 [MB] (10 MBps) [2024-11-19T11:52:50.173Z] Copying: 824/1024 [MB] (10 MBps) [2024-11-19T11:52:51.119Z] Copying: 834/1024 [MB] (10 MBps) [2024-11-19T11:52:52.065Z] Copying: 864464/1048576 [kB] (10208 kBps) [2024-11-19T11:52:53.012Z] Copying: 854/1024 [MB] (10 MBps) [2024-11-19T11:52:53.960Z] Copying: 864/1024 [MB] (10 MBps) [2024-11-19T11:52:55.347Z] Copying: 874/1024 [MB] (10 MBps) [2024-11-19T11:52:55.920Z] Copying: 885/1024 [MB] (10 MBps) [2024-11-19T11:52:57.305Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-19T11:52:58.247Z] Copying: 906/1024 [MB] (10 MBps) [2024-11-19T11:52:59.193Z] Copying: 916/1024 [MB] (10 MBps) 
[2024-11-19T11:53:00.136Z] Copying: 926/1024 [MB] (10 MBps) [2024-11-19T11:53:01.078Z] Copying: 937/1024 [MB] (10 MBps) [2024-11-19T11:53:02.024Z] Copying: 947/1024 [MB] (10 MBps) [2024-11-19T11:53:02.969Z] Copying: 958/1024 [MB] (10 MBps) [2024-11-19T11:53:03.915Z] Copying: 969/1024 [MB] (10 MBps) [2024-11-19T11:53:05.305Z] Copying: 979/1024 [MB] (10 MBps) [2024-11-19T11:53:06.251Z] Copying: 990/1024 [MB] (10 MBps) [2024-11-19T11:53:07.197Z] Copying: 1000/1024 [MB] (10 MBps) [2024-11-19T11:53:08.142Z] Copying: 1010/1024 [MB] (10 MBps) [2024-11-19T11:53:09.085Z] Copying: 1021/1024 [MB] (10 MBps) [2024-11-19T11:53:09.085Z] Copying: 1048540/1048576 [kB] (2548 kBps) [2024-11-19T11:53:09.085Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-19 11:53:08.958856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.673 [2024-11-19 11:53:08.958929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:55.673 [2024-11-19 11:53:08.958947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:55.673 [2024-11-19 11:53:08.958968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.673 [2024-11-19 11:53:08.960234] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:55.673 [2024-11-19 11:53:08.963982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.673 [2024-11-19 11:53:08.964033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:55.673 [2024-11-19 11:53:08.964046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.694 ms 00:21:55.673 [2024-11-19 11:53:08.964065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.673 [2024-11-19 11:53:08.976773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.673 [2024-11-19 11:53:08.976822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:55.673 [2024-11-19 11:53:08.976835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.619 ms 00:21:55.673 [2024-11-19 11:53:08.976843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.673 [2024-11-19 11:53:09.001508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.673 [2024-11-19 11:53:09.001712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:55.673 [2024-11-19 11:53:09.001736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.645 ms 00:21:55.673 [2024-11-19 11:53:09.001745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.673 [2024-11-19 11:53:09.007952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.673 [2024-11-19 11:53:09.007996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:55.673 [2024-11-19 11:53:09.008008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.159 ms 00:21:55.673 [2024-11-19 11:53:09.008016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.673 [2024-11-19 11:53:09.011073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.673 [2024-11-19 11:53:09.011126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:55.673 [2024-11-19 11:53:09.011139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:21:55.673 [2024-11-19 11:53:09.011148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:55.673 [2024-11-19 11:53:09.016471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.673 [2024-11-19 11:53:09.016673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:55.673 [2024-11-19 11:53:09.016693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.279 ms 00:21:55.673 [2024-11-19 11:53:09.016701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.937 [2024-11-19 11:53:09.290878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.937 [2024-11-19 11:53:09.290945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:55.937 [2024-11-19 11:53:09.290960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 274.122 ms 00:21:55.937 [2024-11-19 11:53:09.290978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.937 [2024-11-19 11:53:09.294523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.937 [2024-11-19 11:53:09.294571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:55.937 [2024-11-19 11:53:09.294581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.526 ms 00:21:55.937 [2024-11-19 11:53:09.294588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.937 [2024-11-19 11:53:09.297669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.937 [2024-11-19 11:53:09.297716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:55.937 [2024-11-19 11:53:09.297726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.037 ms 00:21:55.937 [2024-11-19 11:53:09.297733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.937 [2024-11-19 11:53:09.300005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.937 [2024-11-19 11:53:09.300186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:55.937 [2024-11-19 11:53:09.300205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:21:55.937 [2024-11-19 11:53:09.300213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.937 [2024-11-19 11:53:09.302691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.937 [2024-11-19 11:53:09.302738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:55.937 [2024-11-19 11:53:09.302748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.357 ms 00:21:55.937 [2024-11-19 11:53:09.302755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.937 [2024-11-19 11:53:09.302796] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:55.937 [2024-11-19 11:53:09.302812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 97280 / 261120 wr_cnt: 1 state: open 00:21:55.937 [2024-11-19 11:53:09.302822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 
wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:55.937 [2024-11-19 11:53:09.302971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.302979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.302986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.302993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303243] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303777] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.303947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:55.938 [2024-11-19 11:53:09.304456] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:55.938 [2024-11-19 11:53:09.304465] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 103bcb84-4551-4691-830d-e5d80ae80c8c 00:21:55.938 [2024-11-19 11:53:09.304474] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 97280 00:21:55.938 [2024-11-19 11:53:09.304482] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 98240 00:21:55.938 [2024-11-19 11:53:09.304501] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user 
writes: 97280 00:21:55.938 [2024-11-19 11:53:09.304510] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0099 00:21:55.938 [2024-11-19 11:53:09.304518] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:55.938 [2024-11-19 11:53:09.304526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:55.938 [2024-11-19 11:53:09.304534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:55.938 [2024-11-19 11:53:09.304541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:55.939 [2024-11-19 11:53:09.304549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:55.939 [2024-11-19 11:53:09.304558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.939 [2024-11-19 11:53:09.304566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:55.939 [2024-11-19 11:53:09.304575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.762 ms 00:21:55.939 [2024-11-19 11:53:09.304598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.306866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.939 [2024-11-19 11:53:09.306904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:55.939 [2024-11-19 11:53:09.306914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:21:55.939 [2024-11-19 11:53:09.306923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.307042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.939 [2024-11-19 11:53:09.307051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:55.939 [2024-11-19 11:53:09.307060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:21:55.939 [2024-11-19 11:53:09.307067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.314038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.939 [2024-11-19 11:53:09.314206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:55.939 [2024-11-19 11:53:09.314263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.939 [2024-11-19 11:53:09.314286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.314357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.939 [2024-11-19 11:53:09.314379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:55.939 [2024-11-19 11:53:09.314399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.939 [2024-11-19 11:53:09.314457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.314536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.939 [2024-11-19 11:53:09.314638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:55.939 [2024-11-19 11:53:09.314665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.939 [2024-11-19 11:53:09.314684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.314716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.939 [2024-11-19 11:53:09.314748] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:55.939 [2024-11-19 11:53:09.314769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.939 [2024-11-19 11:53:09.314788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.328091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.939 [2024-11-19 11:53:09.328276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:55.939 [2024-11-19 11:53:09.328293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.939 [2024-11-19 11:53:09.328310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.338192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.939 [2024-11-19 11:53:09.338246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.939 [2024-11-19 11:53:09.338258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.939 [2024-11-19 11:53:09.338266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.338313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.939 [2024-11-19 11:53:09.338328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.939 [2024-11-19 11:53:09.338337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.939 [2024-11-19 11:53:09.338345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.338380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.939 [2024-11-19 11:53:09.338388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.939 [2024-11-19 11:53:09.338397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.939 [2024-11-19 11:53:09.338471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.338543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.939 [2024-11-19 11:53:09.338575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.939 [2024-11-19 11:53:09.338587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.939 [2024-11-19 11:53:09.338594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.338632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.939 [2024-11-19 11:53:09.338645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:55.939 [2024-11-19 11:53:09.338654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.939 [2024-11-19 11:53:09.338662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.338700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.939 [2024-11-19 11:53:09.338710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.939 [2024-11-19 11:53:09.338721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.939 [2024-11-19 11:53:09.338729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.338774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:21:55.939 [2024-11-19 11:53:09.338784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.939 [2024-11-19 11:53:09.338792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.939 [2024-11-19 11:53:09.338799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.939 [2024-11-19 11:53:09.338926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 381.971 ms, result 0 00:21:56.884 00:21:56.884 00:21:56.884 11:53:10 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:56.884 [2024-11-19 11:53:10.255781] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:56.884 [2024-11-19 11:53:10.255933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88444 ] 00:21:57.146 [2024-11-19 11:53:10.393825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.146 [2024-11-19 11:53:10.444556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.146 [2024-11-19 11:53:10.555956] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:57.146 [2024-11-19 11:53:10.556043] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:57.409 [2024-11-19 11:53:10.718038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.409 [2024-11-19 11:53:10.718303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:57.409 [2024-11-19 11:53:10.718334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:57.409 [2024-11-19 11:53:10.718344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.409 [2024-11-19 11:53:10.718443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.409 [2024-11-19 11:53:10.718456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:57.409 [2024-11-19 11:53:10.718466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:57.409 [2024-11-19 11:53:10.718481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.409 [2024-11-19 11:53:10.718505] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:57.409 [2024-11-19 11:53:10.718765] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:57.409 [2024-11-19 11:53:10.718783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.409 [2024-11-19 11:53:10.718792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:57.409 [2024-11-19 11:53:10.718809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:21:57.409 [2024-11-19 11:53:10.718821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.409 [2024-11-19 11:53:10.720527] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:57.409 [2024-11-19 11:53:10.724294] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.409 [2024-11-19 11:53:10.724346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:57.409 [2024-11-19 11:53:10.724366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.768 ms 00:21:57.409 [2024-11-19 11:53:10.724374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.409 [2024-11-19 11:53:10.724474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.409 [2024-11-19 11:53:10.724488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:57.409 [2024-11-19 11:53:10.724497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:57.409 [2024-11-19 11:53:10.724504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.409 [2024-11-19 11:53:10.732641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.409 [2024-11-19 11:53:10.732684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:57.409 [2024-11-19 11:53:10.732696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.093 ms 00:21:57.409 [2024-11-19 11:53:10.732704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.409 [2024-11-19 11:53:10.732824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.409 [2024-11-19 11:53:10.732835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:57.409 [2024-11-19 11:53:10.732844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:21:57.409 [2024-11-19 11:53:10.732855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.409 [2024-11-19 11:53:10.732913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.409 [2024-11-19 11:53:10.732926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:57.409 [2024-11-19 11:53:10.732935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:57.409 [2024-11-19 11:53:10.732943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.409 [2024-11-19 11:53:10.732965] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:57.410 [2024-11-19 11:53:10.735083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.410 [2024-11-19 11:53:10.735123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:57.410 [2024-11-19 11:53:10.735134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:21:57.410 [2024-11-19 11:53:10.735148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.410 [2024-11-19 11:53:10.735184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.410 [2024-11-19 11:53:10.735192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:57.410 [2024-11-19 11:53:10.735201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:57.410 [2024-11-19 11:53:10.735209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.410 [2024-11-19 11:53:10.735235] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:57.410 [2024-11-19 11:53:10.735260] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:57.410 [2024-11-19 
11:53:10.735302] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:57.410 [2024-11-19 11:53:10.735327] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:57.410 [2024-11-19 11:53:10.735448] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:57.410 [2024-11-19 11:53:10.735460] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:57.410 [2024-11-19 11:53:10.735471] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:57.410 [2024-11-19 11:53:10.735482] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:57.410 [2024-11-19 11:53:10.735494] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:57.410 [2024-11-19 11:53:10.735506] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:57.410 [2024-11-19 11:53:10.735513] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:57.410 [2024-11-19 11:53:10.735525] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:57.410 [2024-11-19 11:53:10.735536] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:57.410 [2024-11-19 11:53:10.735544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.410 [2024-11-19 11:53:10.735553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:57.410 [2024-11-19 11:53:10.735564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:21:57.410 [2024-11-19 11:53:10.735572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.410 [2024-11-19 11:53:10.735665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.410 [2024-11-19 11:53:10.735677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:57.410 [2024-11-19 11:53:10.735690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:57.410 [2024-11-19 11:53:10.735698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.410 [2024-11-19 11:53:10.735796] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:57.410 [2024-11-19 11:53:10.735807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:57.410 [2024-11-19 11:53:10.735816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:57.410 [2024-11-19 11:53:10.735826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.410 [2024-11-19 11:53:10.735835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:57.410 [2024-11-19 11:53:10.735843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:57.410 [2024-11-19 11:53:10.735852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:57.410 [2024-11-19 11:53:10.735861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:57.410 [2024-11-19 11:53:10.735869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:57.410 [2024-11-19 11:53:10.735876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:57.410 [2024-11-19 
11:53:10.735884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:57.410 [2024-11-19 11:53:10.735891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:57.410 [2024-11-19 11:53:10.735899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:57.410 [2024-11-19 11:53:10.735911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:57.410 [2024-11-19 11:53:10.735919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:57.410 [2024-11-19 11:53:10.735927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.410 [2024-11-19 11:53:10.735935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:57.410 [2024-11-19 11:53:10.735945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:57.410 [2024-11-19 11:53:10.735953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.410 [2024-11-19 11:53:10.735961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:57.410 [2024-11-19 11:53:10.735969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:57.410 [2024-11-19 11:53:10.735976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.410 [2024-11-19 11:53:10.735984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:57.410 [2024-11-19 11:53:10.735992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:57.410 [2024-11-19 11:53:10.736000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.410 [2024-11-19 11:53:10.736008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:57.410 [2024-11-19 11:53:10.736016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:57.410 [2024-11-19 11:53:10.736024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.410 [2024-11-19 11:53:10.736031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:57.410 [2024-11-19 11:53:10.736042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:57.410 [2024-11-19 11:53:10.736050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.410 [2024-11-19 11:53:10.736058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:57.410 [2024-11-19 11:53:10.736065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:57.410 [2024-11-19 11:53:10.736072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:57.410 [2024-11-19 11:53:10.736081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:57.410 [2024-11-19 11:53:10.736088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:57.410 [2024-11-19 11:53:10.736096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:57.410 [2024-11-19 11:53:10.736103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:57.410 [2024-11-19 11:53:10.736111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:57.410 [2024-11-19 11:53:10.736119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.410 [2024-11-19 11:53:10.736127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:57.410 [2024-11-19 11:53:10.736134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.75 MiB 00:21:57.410 [2024-11-19 11:53:10.736142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.410 [2024-11-19 11:53:10.736150] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:57.410 [2024-11-19 11:53:10.736159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:57.410 [2024-11-19 11:53:10.736173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:57.410 [2024-11-19 11:53:10.736183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.410 [2024-11-19 11:53:10.736192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:57.410 [2024-11-19 11:53:10.736200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:57.410 [2024-11-19 11:53:10.736210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:57.410 [2024-11-19 11:53:10.736218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:57.410 [2024-11-19 11:53:10.736226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:57.410 [2024-11-19 11:53:10.736235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:57.410 [2024-11-19 11:53:10.736244] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:57.410 [2024-11-19 11:53:10.736254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:57.410 [2024-11-19 11:53:10.736265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:57.410 [2024-11-19 11:53:10.736273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:57.410 [2024-11-19 11:53:10.736281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:57.410 [2024-11-19 11:53:10.736288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:57.410 [2024-11-19 11:53:10.736295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:57.410 [2024-11-19 11:53:10.736302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:57.410 [2024-11-19 11:53:10.736312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:57.410 [2024-11-19 11:53:10.736319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:57.410 [2024-11-19 11:53:10.736327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:57.410 [2024-11-19 11:53:10.736341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:57.410 [2024-11-19 11:53:10.736348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:57.410 [2024-11-19 11:53:10.736355] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:57.410 [2024-11-19 11:53:10.736363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:57.410 [2024-11-19 11:53:10.736370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:57.411 [2024-11-19 11:53:10.736377] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:57.411 [2024-11-19 11:53:10.736385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:57.411 [2024-11-19 11:53:10.736394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:57.411 [2024-11-19 11:53:10.736401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:57.411 [2024-11-19 11:53:10.736447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:57.411 [2024-11-19 11:53:10.736456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:57.411 [2024-11-19 11:53:10.736463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.736471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:57.411 [2024-11-19 11:53:10.736482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:21:57.411 [2024-11-19 11:53:10.736493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.411 [2024-11-19 11:53:10.758561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.758631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:57.411 [2024-11-19 11:53:10.758651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.016 ms 00:21:57.411 [2024-11-19 11:53:10.758666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.411 [2024-11-19 11:53:10.758796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.758810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:57.411 [2024-11-19 11:53:10.758821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:57.411 [2024-11-19 11:53:10.758837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.411 [2024-11-19 11:53:10.771075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.771129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:57.411 [2024-11-19 11:53:10.771140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.161 ms 00:21:57.411 [2024-11-19 11:53:10.771147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.411 [2024-11-19 11:53:10.771183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.771191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:57.411 
[2024-11-19 11:53:10.771200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:57.411 [2024-11-19 11:53:10.771208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.411 [2024-11-19 11:53:10.771805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.771849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:57.411 [2024-11-19 11:53:10.771860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:21:57.411 [2024-11-19 11:53:10.771868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.411 [2024-11-19 11:53:10.772028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.772038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:57.411 [2024-11-19 11:53:10.772048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:21:57.411 [2024-11-19 11:53:10.772057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.411 [2024-11-19 11:53:10.779143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.779190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:57.411 [2024-11-19 11:53:10.779207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.061 ms 00:21:57.411 [2024-11-19 11:53:10.779215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.411 [2024-11-19 11:53:10.783276] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:57.411 [2024-11-19 11:53:10.783330] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:57.411 [2024-11-19 11:53:10.783346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.783355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:57.411 [2024-11-19 11:53:10.783364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.031 ms 00:21:57.411 [2024-11-19 11:53:10.783371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.411 [2024-11-19 11:53:10.801715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.801793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:57.411 [2024-11-19 11:53:10.801809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.253 ms 00:21:57.411 [2024-11-19 11:53:10.801822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.411 [2024-11-19 11:53:10.804883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.805079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:57.411 [2024-11-19 11:53:10.805098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.007 ms 00:21:57.411 [2024-11-19 11:53:10.805106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.411 [2024-11-19 11:53:10.807915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.807962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:57.411 [2024-11-19 11:53:10.807972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.769 ms 00:21:57.411 [2024-11-19 11:53:10.807980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.411 [2024-11-19 11:53:10.808326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.411 [2024-11-19 11:53:10.808341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:57.411 [2024-11-19 11:53:10.808351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:21:57.411 [2024-11-19 11:53:10.808358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.673 [2024-11-19 11:53:10.831884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.673 [2024-11-19 11:53:10.832090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:57.673 [2024-11-19 11:53:10.832150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.508 ms 00:21:57.673 [2024-11-19 11:53:10.832176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.673 [2024-11-19 11:53:10.840316] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:57.673 [2024-11-19 11:53:10.843352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.673 [2024-11-19 11:53:10.843500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:57.673 [2024-11-19 11:53:10.843526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.124 ms 00:21:57.673 [2024-11-19 11:53:10.843540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.673 [2024-11-19 11:53:10.843614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.673 [2024-11-19 11:53:10.843628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:57.673 [2024-11-19 11:53:10.843637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:57.673 [2024-11-19 11:53:10.843645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.673 [2024-11-19 11:53:10.845246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.673 [2024-11-19 11:53:10.845290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:57.673 [2024-11-19 11:53:10.845308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.565 ms 00:21:57.673 [2024-11-19 11:53:10.845321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.673 [2024-11-19 11:53:10.845353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.673 [2024-11-19 11:53:10.845368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:57.673 [2024-11-19 11:53:10.845377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:57.673 [2024-11-19 11:53:10.845385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.673 [2024-11-19 11:53:10.845449] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:57.673 [2024-11-19 11:53:10.845461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.673 [2024-11-19 11:53:10.845469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:57.673 [2024-11-19 11:53:10.845478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:57.673 [2024-11-19 11:53:10.845486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:57.673 [2024-11-19 11:53:10.850813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.673 [2024-11-19 11:53:10.850979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:57.673 [2024-11-19 11:53:10.850999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.302 ms 00:21:57.673 [2024-11-19 11:53:10.851007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.673 [2024-11-19 11:53:10.851082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.673 [2024-11-19 11:53:10.851092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:57.673 [2024-11-19 11:53:10.851101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:57.673 [2024-11-19 11:53:10.851109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.673 [2024-11-19 11:53:10.852390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.899 ms, result 0 00:21:59.063
[~85 incremental 'Copying: N/1024 [MB]' / 'Copying: N/1048576 [kB]' progress records from 2024-11-19T11:53:13Z to 2024-11-19T11:54:31Z condensed; final state below]
[2024-11-19T11:54:31.297Z] Copying: 1024/1024 [MB] (average 12 MBps)
[2024-11-19 11:54:31.099918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.885 [2024-11-19 11:54:31.100623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:17.885 [2024-11-19 11:54:31.100913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:17.885 [2024-11-19 11:54:31.100963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.885 [2024-11-19 11:54:31.101099] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:17.885 [2024-11-19 11:54:31.102065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.885 [2024-11-19 11:54:31.102255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:17.885 [2024-11-19 11:54:31.102372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:23:17.885 [2024-11-19 11:54:31.102449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.885
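(The shutdown records above and below are emitted by trace_step() in mngt/ftl_mngt.c as Action / name / duration / status quadruples, so per-step timings can be pulled straight out of the log. A minimal sketch, assuming the console output has been saved one record per line — as in the raw autotest console — under the hypothetical filename ftl.log, rather than wrapped as in this transcript:

    awk '/428:trace_step/ { sub(/.*name: /, "");     step = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print step " -> " $0 }' ftl.log

For the shutdown below this would print lines such as 'Persist L2P -> 8.012 ms' and 'Persist P2L metadata -> 367.858 ms'.)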
[2024-11-19 11:54:31.102871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.885 [2024-11-19 11:54:31.103047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:17.885 [2024-11-19 11:54:31.103090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:23:17.885 [2024-11-19 11:54:31.103126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.885 [2024-11-19 11:54:31.111188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.885 [2024-11-19 11:54:31.111348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:17.885 [2024-11-19 11:54:31.111428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.012 ms 00:23:17.885 [2024-11-19 11:54:31.111463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.885 [2024-11-19 11:54:31.117995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.885 [2024-11-19 11:54:31.118155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:17.885 [2024-11-19 11:54:31.118226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.298 ms 00:23:17.885 [2024-11-19 11:54:31.118250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.885 [2024-11-19 11:54:31.121121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.885 [2024-11-19 11:54:31.121363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:17.885 [2024-11-19 11:54:31.121386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.819 ms 00:23:17.885 [2024-11-19 11:54:31.121394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.885 [2024-11-19 11:54:31.125917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.885 [2024-11-19 11:54:31.125953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:17.885 [2024-11-19 11:54:31.125963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.456 ms 00:23:17.885 [2024-11-19 11:54:31.125972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.148 [2024-11-19 11:54:31.493862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.148 [2024-11-19 11:54:31.493954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:18.148 [2024-11-19 11:54:31.493975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 367.858 ms 00:23:18.148 [2024-11-19 11:54:31.493983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.148 [2024-11-19 11:54:31.497837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.148 [2024-11-19 11:54:31.497878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:18.148 [2024-11-19 11:54:31.497889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.834 ms 00:23:18.148 [2024-11-19 11:54:31.497897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.148 [2024-11-19 11:54:31.500765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.148 [2024-11-19 11:54:31.500827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:18.148 [2024-11-19 11:54:31.500852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.824 ms 00:23:18.148 [2024-11-19 11:54:31.500959] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.148 [2024-11-19 11:54:31.503636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.148 [2024-11-19 11:54:31.503816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:18.148 [2024-11-19 11:54:31.503888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.201 ms 00:23:18.148 [2024-11-19 11:54:31.503913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.148 [2024-11-19 11:54:31.505944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.148 [2024-11-19 11:54:31.506091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:18.148 [2024-11-19 11:54:31.506152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.948 ms 00:23:18.148 [2024-11-19 11:54:31.506174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.148 [2024-11-19 11:54:31.506219] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:18.148 [2024-11-19 11:54:31.506248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:18.148 [2024-11-19 11:54:31.506280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:18.148 [2024-11-19 11:54:31.506308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:18.148 [2024-11-19 11:54:31.506387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:18.148 [2024-11-19 11:54:31.506436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506940] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.506967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.507988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 
11:54:31.508172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.508997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:23:18.149 [2024-11-19 11:54:31.509022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:18.149 [2024-11-19 11:54:31.509212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:23:18.150 [2024-11-19 11:54:31.509220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:18.150 [2024-11-19 11:54:31.509227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:18.150 [2024-11-19 11:54:31.509235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:18.150 [2024-11-19 11:54:31.509242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:18.150 [2024-11-19 11:54:31.509250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:18.150 [2024-11-19 11:54:31.509257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:18.150 [2024-11-19 11:54:31.509265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:18.150 [2024-11-19 11:54:31.509272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:18.150 [2024-11-19 11:54:31.509289] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:18.150 [2024-11-19 11:54:31.509297] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 103bcb84-4551-4691-830d-e5d80ae80c8c 00:23:18.150 [2024-11-19 11:54:31.509306] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:18.150 [2024-11-19 11:54:31.509313] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 34752 00:23:18.150 [2024-11-19 11:54:31.509320] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 33792 00:23:18.150 [2024-11-19 11:54:31.509329] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0284 00:23:18.150 [2024-11-19 11:54:31.509344] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:18.150 [2024-11-19 11:54:31.509351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:18.150 [2024-11-19 11:54:31.509359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:18.150 [2024-11-19 11:54:31.509366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:18.150 [2024-11-19 11:54:31.509373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:18.150 [2024-11-19 11:54:31.509381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.150 [2024-11-19 11:54:31.509401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:18.150 [2024-11-19 11:54:31.509434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.163 ms 00:23:18.150 [2024-11-19 11:54:31.509443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.511947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.150 [2024-11-19 11:54:31.512074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:18.150 [2024-11-19 11:54:31.512135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.480 ms 00:23:18.150 [2024-11-19 11:54:31.512157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.512294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.150 [2024-11-19 11:54:31.512318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:23:18.150 [2024-11-19 11:54:31.512349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:23:18.150 [2024-11-19 11:54:31.512368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.519139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.150 [2024-11-19 11:54:31.519288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:18.150 [2024-11-19 11:54:31.519341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.150 [2024-11-19 11:54:31.519363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.519483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.150 [2024-11-19 11:54:31.519508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:18.150 [2024-11-19 11:54:31.519529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.150 [2024-11-19 11:54:31.519548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.519605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.150 [2024-11-19 11:54:31.519629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:18.150 [2024-11-19 11:54:31.519654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.150 [2024-11-19 11:54:31.519716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.519750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.150 [2024-11-19 11:54:31.519770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:18.150 [2024-11-19 11:54:31.519789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.150 [2024-11-19 11:54:31.519808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.534034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.150 [2024-11-19 11:54:31.534212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:18.150 [2024-11-19 11:54:31.534267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.150 [2024-11-19 11:54:31.534290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.546241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.150 [2024-11-19 11:54:31.546494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:18.150 [2024-11-19 11:54:31.546588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.150 [2024-11-19 11:54:31.546615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.546697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.150 [2024-11-19 11:54:31.546722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:18.150 [2024-11-19 11:54:31.546743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.150 [2024-11-19 11:54:31.546826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.546886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.150 
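(A quick cross-check on the statistics dumped above: the reported WAF is just total writes over user writes — 34752 / 33792 reproduces the logged 1.0284, i.e. about 2.8% more data hit the media than the user I/O itself wrote, presumably metadata and relocation overhead. Verifiable with any calculator, e.g.:

    printf 'WAF: %.4f\n' "$(echo 'scale=6; 34752 / 33792' | bc)"    # prints WAF: 1.0284
)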
[2024-11-19 11:54:31.546909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:18.150 [2024-11-19 11:54:31.546929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.150 [2024-11-19 11:54:31.546949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.547116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.150 [2024-11-19 11:54:31.547149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:18.150 [2024-11-19 11:54:31.547170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.150 [2024-11-19 11:54:31.547189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.547339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.150 [2024-11-19 11:54:31.547367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:18.150 [2024-11-19 11:54:31.547378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.150 [2024-11-19 11:54:31.547387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.547739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.150 [2024-11-19 11:54:31.547760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:18.150 [2024-11-19 11:54:31.547770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.150 [2024-11-19 11:54:31.547779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.547840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.150 [2024-11-19 11:54:31.547851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:18.150 [2024-11-19 11:54:31.547860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.150 [2024-11-19 11:54:31.547868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.150 [2024-11-19 11:54:31.548013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 448.066 ms, result 0 00:23:18.722 00:23:18.722 00:23:18.722 11:54:31 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:21.288 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:21.288 Process with pid 86010 is not found 00:23:21.288 Remove shared memory files 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86010 00:23:21.288 11:54:34 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86010 ']' 00:23:21.288 11:54:34 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86010 00:23:21.288 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86010) - No such process 00:23:21.288 11:54:34 
ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86010 is not found' 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:21.288 11:54:34 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:21.288 ************************************ 00:23:21.288 END TEST ftl_restore 00:23:21.288 ************************************ 00:23:21.288 00:23:21.288 real 5m17.711s 00:23:21.288 user 5m4.694s 00:23:21.288 sys 0m12.355s 00:23:21.288 11:54:34 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:21.288 11:54:34 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:21.288 11:54:34 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:21.288 11:54:34 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:21.288 11:54:34 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:21.288 11:54:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:21.288 ************************************ 00:23:21.288 START TEST ftl_dirty_shutdown 00:23:21.288 ************************************ 00:23:21.288 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:21.288 * Looking for test storage... 
00:23:21.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:21.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.289 --rc genhtml_branch_coverage=1 00:23:21.289 --rc genhtml_function_coverage=1 00:23:21.289 --rc genhtml_legend=1 00:23:21.289 --rc geninfo_all_blocks=1 00:23:21.289 --rc geninfo_unexecuted_blocks=1 00:23:21.289 00:23:21.289 ' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:21.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.289 --rc genhtml_branch_coverage=1 00:23:21.289 --rc genhtml_function_coverage=1 00:23:21.289 --rc genhtml_legend=1 00:23:21.289 --rc geninfo_all_blocks=1 00:23:21.289 --rc geninfo_unexecuted_blocks=1 00:23:21.289 00:23:21.289 ' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:21.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.289 --rc genhtml_branch_coverage=1 00:23:21.289 --rc genhtml_function_coverage=1 00:23:21.289 --rc genhtml_legend=1 00:23:21.289 --rc geninfo_all_blocks=1 00:23:21.289 --rc geninfo_unexecuted_blocks=1 00:23:21.289 00:23:21.289 ' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:21.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.289 --rc genhtml_branch_coverage=1 00:23:21.289 --rc genhtml_function_coverage=1 00:23:21.289 --rc genhtml_legend=1 00:23:21.289 --rc geninfo_all_blocks=1 00:23:21.289 --rc geninfo_unexecuted_blocks=1 00:23:21.289 00:23:21.289 ' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:21.289 11:54:34 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89368 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89368 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89368 ']' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:21.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:21.289 11:54:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:21.289 [2024-11-19 11:54:34.592704] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
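(Recapping the option handling traced above: the test was started as dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0, and the getopts loop reduces to roughly the following — a sketch reconstructed from the xtrace, not the verbatim script:

    while getopts ':u:c:' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;        # NV-cache controller, 0000:00:10.0 in this run
        esac
    done
    shift 2                               # consume '-c <bdf>', as in the trace
    device=$1                             # base controller, 0000:00:11.0 in this run

With timeout=240, block_size=4096 and chunk_size=data_size=262144 fixed, the script then launches spdk_tgt -m 0x1 as pid 89368, whose DPDK/EAL startup banner continues below.)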
00:23:21.289 [2024-11-19 11:54:34.593084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89368 ] 00:23:21.550 [2024-11-19 11:54:34.728000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.550 [2024-11-19 11:54:34.781492] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.123 11:54:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.123 11:54:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:22.123 11:54:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:22.123 11:54:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:22.123 11:54:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:22.123 11:54:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:22.123 11:54:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:22.123 11:54:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:22.383 11:54:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:22.383 11:54:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:22.383 11:54:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:22.383 11:54:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:22.383 11:54:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:22.383 11:54:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:22.383 11:54:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:22.384 11:54:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:22.645 11:54:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:22.645 { 00:23:22.645 "name": "nvme0n1", 00:23:22.645 "aliases": [ 00:23:22.645 "4eb9bacc-7593-4a6f-8512-2886c7465181" 00:23:22.645 ], 00:23:22.645 "product_name": "NVMe disk", 00:23:22.645 "block_size": 4096, 00:23:22.645 "num_blocks": 1310720, 00:23:22.645 "uuid": "4eb9bacc-7593-4a6f-8512-2886c7465181", 00:23:22.645 "numa_id": -1, 00:23:22.645 "assigned_rate_limits": { 00:23:22.645 "rw_ios_per_sec": 0, 00:23:22.645 "rw_mbytes_per_sec": 0, 00:23:22.645 "r_mbytes_per_sec": 0, 00:23:22.645 "w_mbytes_per_sec": 0 00:23:22.645 }, 00:23:22.645 "claimed": true, 00:23:22.645 "claim_type": "read_many_write_one", 00:23:22.645 "zoned": false, 00:23:22.645 "supported_io_types": { 00:23:22.645 "read": true, 00:23:22.645 "write": true, 00:23:22.645 "unmap": true, 00:23:22.645 "flush": true, 00:23:22.645 "reset": true, 00:23:22.645 "nvme_admin": true, 00:23:22.645 "nvme_io": true, 00:23:22.645 "nvme_io_md": false, 00:23:22.645 "write_zeroes": true, 00:23:22.645 "zcopy": false, 00:23:22.645 "get_zone_info": false, 00:23:22.645 "zone_management": false, 00:23:22.645 "zone_append": false, 00:23:22.645 "compare": true, 00:23:22.645 "compare_and_write": false, 00:23:22.645 "abort": true, 00:23:22.645 "seek_hole": false, 00:23:22.645 "seek_data": false, 00:23:22.645 
"copy": true, 00:23:22.645 "nvme_iov_md": false 00:23:22.645 }, 00:23:22.645 "driver_specific": { 00:23:22.645 "nvme": [ 00:23:22.645 { 00:23:22.645 "pci_address": "0000:00:11.0", 00:23:22.645 "trid": { 00:23:22.645 "trtype": "PCIe", 00:23:22.645 "traddr": "0000:00:11.0" 00:23:22.645 }, 00:23:22.645 "ctrlr_data": { 00:23:22.645 "cntlid": 0, 00:23:22.645 "vendor_id": "0x1b36", 00:23:22.645 "model_number": "QEMU NVMe Ctrl", 00:23:22.645 "serial_number": "12341", 00:23:22.645 "firmware_revision": "8.0.0", 00:23:22.645 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:22.645 "oacs": { 00:23:22.645 "security": 0, 00:23:22.645 "format": 1, 00:23:22.645 "firmware": 0, 00:23:22.645 "ns_manage": 1 00:23:22.645 }, 00:23:22.645 "multi_ctrlr": false, 00:23:22.645 "ana_reporting": false 00:23:22.645 }, 00:23:22.645 "vs": { 00:23:22.645 "nvme_version": "1.4" 00:23:22.645 }, 00:23:22.645 "ns_data": { 00:23:22.645 "id": 1, 00:23:22.645 "can_share": false 00:23:22.645 } 00:23:22.645 } 00:23:22.645 ], 00:23:22.645 "mp_policy": "active_passive" 00:23:22.645 } 00:23:22.645 } 00:23:22.645 ]' 00:23:22.645 11:54:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:22.645 11:54:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:22.645 11:54:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:22.645 11:54:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:22.645 11:54:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:22.645 11:54:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:23:22.645 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:22.645 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:22.645 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:22.645 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:22.645 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:22.907 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=7c3a2b44-656c-4297-8b62-1e1b9ba9ca79 00:23:22.907 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:22.907 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7c3a2b44-656c-4297-8b62-1e1b9ba9ca79 00:23:23.168 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:23.429 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=84f28f1b-67c7-4c6f-a896-449088253f0f 00:23:23.429 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 84f28f1b-67c7-4c6f-a896-449088253f0f 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=efb52477-8ff5-4674-97a1-6e478b3b0e53 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 efb52477-8ff5-4674-97a1-6e478b3b0e53 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=efb52477-8ff5-4674-97a1-6e478b3b0e53 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size efb52477-8ff5-4674-97a1-6e478b3b0e53 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=efb52477-8ff5-4674-97a1-6e478b3b0e53 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:23.690 11:54:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b efb52477-8ff5-4674-97a1-6e478b3b0e53 00:23:23.952 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:23.952 { 00:23:23.952 "name": "efb52477-8ff5-4674-97a1-6e478b3b0e53", 00:23:23.952 "aliases": [ 00:23:23.952 "lvs/nvme0n1p0" 00:23:23.952 ], 00:23:23.952 "product_name": "Logical Volume", 00:23:23.952 "block_size": 4096, 00:23:23.952 "num_blocks": 26476544, 00:23:23.952 "uuid": "efb52477-8ff5-4674-97a1-6e478b3b0e53", 00:23:23.952 "assigned_rate_limits": { 00:23:23.952 "rw_ios_per_sec": 0, 00:23:23.952 "rw_mbytes_per_sec": 0, 00:23:23.952 "r_mbytes_per_sec": 0, 00:23:23.952 "w_mbytes_per_sec": 0 00:23:23.952 }, 00:23:23.952 "claimed": false, 00:23:23.952 "zoned": false, 00:23:23.952 "supported_io_types": { 00:23:23.952 "read": true, 00:23:23.952 "write": true, 00:23:23.952 "unmap": true, 00:23:23.953 "flush": false, 00:23:23.953 "reset": true, 00:23:23.953 "nvme_admin": false, 00:23:23.953 "nvme_io": false, 00:23:23.953 "nvme_io_md": false, 00:23:23.953 "write_zeroes": true, 00:23:23.953 "zcopy": false, 00:23:23.953 "get_zone_info": false, 00:23:23.953 "zone_management": false, 00:23:23.953 "zone_append": false, 00:23:23.953 "compare": false, 00:23:23.953 "compare_and_write": false, 00:23:23.953 "abort": false, 00:23:23.953 "seek_hole": true, 00:23:23.953 "seek_data": true, 00:23:23.953 "copy": false, 00:23:23.953 "nvme_iov_md": false 00:23:23.953 }, 00:23:23.953 "driver_specific": { 00:23:23.953 "lvol": { 00:23:23.953 "lvol_store_uuid": "84f28f1b-67c7-4c6f-a896-449088253f0f", 00:23:23.953 "base_bdev": "nvme0n1", 00:23:23.953 "thin_provision": true, 00:23:23.953 "num_allocated_clusters": 0, 00:23:23.953 "snapshot": false, 00:23:23.953 "clone": false, 00:23:23.953 "esnap_clone": false 00:23:23.953 } 00:23:23.953 } 00:23:23.953 } 00:23:23.953 ]' 00:23:23.953 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:23.953 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:23.953 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:23.953 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:23.953 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:23.953 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:23.953 11:54:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:23.953 11:54:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:23.953 11:54:37 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:24.214 11:54:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:24.214 11:54:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:24.214 11:54:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size efb52477-8ff5-4674-97a1-6e478b3b0e53 00:23:24.214 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=efb52477-8ff5-4674-97a1-6e478b3b0e53 00:23:24.214 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:24.214 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:24.214 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:24.214 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b efb52477-8ff5-4674-97a1-6e478b3b0e53 00:23:24.476 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:24.476 { 00:23:24.476 "name": "efb52477-8ff5-4674-97a1-6e478b3b0e53", 00:23:24.476 "aliases": [ 00:23:24.476 "lvs/nvme0n1p0" 00:23:24.476 ], 00:23:24.476 "product_name": "Logical Volume", 00:23:24.476 "block_size": 4096, 00:23:24.476 "num_blocks": 26476544, 00:23:24.476 "uuid": "efb52477-8ff5-4674-97a1-6e478b3b0e53", 00:23:24.476 "assigned_rate_limits": { 00:23:24.476 "rw_ios_per_sec": 0, 00:23:24.476 "rw_mbytes_per_sec": 0, 00:23:24.476 "r_mbytes_per_sec": 0, 00:23:24.476 "w_mbytes_per_sec": 0 00:23:24.476 }, 00:23:24.476 "claimed": false, 00:23:24.476 "zoned": false, 00:23:24.476 "supported_io_types": { 00:23:24.476 "read": true, 00:23:24.476 "write": true, 00:23:24.476 "unmap": true, 00:23:24.476 "flush": false, 00:23:24.476 "reset": true, 00:23:24.476 "nvme_admin": false, 00:23:24.476 "nvme_io": false, 00:23:24.476 "nvme_io_md": false, 00:23:24.476 "write_zeroes": true, 00:23:24.476 "zcopy": false, 00:23:24.476 "get_zone_info": false, 00:23:24.476 "zone_management": false, 00:23:24.476 "zone_append": false, 00:23:24.476 "compare": false, 00:23:24.476 "compare_and_write": false, 00:23:24.476 "abort": false, 00:23:24.476 "seek_hole": true, 00:23:24.476 "seek_data": true, 00:23:24.476 "copy": false, 00:23:24.476 "nvme_iov_md": false 00:23:24.476 }, 00:23:24.476 "driver_specific": { 00:23:24.476 "lvol": { 00:23:24.476 "lvol_store_uuid": "84f28f1b-67c7-4c6f-a896-449088253f0f", 00:23:24.476 "base_bdev": "nvme0n1", 00:23:24.476 "thin_provision": true, 00:23:24.476 "num_allocated_clusters": 0, 00:23:24.476 "snapshot": false, 00:23:24.476 "clone": false, 00:23:24.476 "esnap_clone": false 00:23:24.476 } 00:23:24.476 } 00:23:24.476 } 00:23:24.476 ]' 00:23:24.476 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:24.476 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:24.476 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:24.476 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:24.476 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:24.476 11:54:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:24.476 11:54:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:24.476 11:54:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:24.737 11:54:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:24.737 11:54:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size efb52477-8ff5-4674-97a1-6e478b3b0e53 00:23:24.737 11:54:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=efb52477-8ff5-4674-97a1-6e478b3b0e53 00:23:24.737 11:54:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:24.737 11:54:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:24.737 11:54:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:24.737 11:54:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b efb52477-8ff5-4674-97a1-6e478b3b0e53 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:24.999 { 00:23:24.999 "name": "efb52477-8ff5-4674-97a1-6e478b3b0e53", 00:23:24.999 "aliases": [ 00:23:24.999 "lvs/nvme0n1p0" 00:23:24.999 ], 00:23:24.999 "product_name": "Logical Volume", 00:23:24.999 "block_size": 4096, 00:23:24.999 "num_blocks": 26476544, 00:23:24.999 "uuid": "efb52477-8ff5-4674-97a1-6e478b3b0e53", 00:23:24.999 "assigned_rate_limits": { 00:23:24.999 "rw_ios_per_sec": 0, 00:23:24.999 "rw_mbytes_per_sec": 0, 00:23:24.999 "r_mbytes_per_sec": 0, 00:23:24.999 "w_mbytes_per_sec": 0 00:23:24.999 }, 00:23:24.999 "claimed": false, 00:23:24.999 "zoned": false, 00:23:24.999 "supported_io_types": { 00:23:24.999 "read": true, 00:23:24.999 "write": true, 00:23:24.999 "unmap": true, 00:23:24.999 "flush": false, 00:23:24.999 "reset": true, 00:23:24.999 "nvme_admin": false, 00:23:24.999 "nvme_io": false, 00:23:24.999 "nvme_io_md": false, 00:23:24.999 "write_zeroes": true, 00:23:24.999 "zcopy": false, 00:23:24.999 "get_zone_info": false, 00:23:24.999 "zone_management": false, 00:23:24.999 "zone_append": false, 00:23:24.999 "compare": false, 00:23:24.999 "compare_and_write": false, 00:23:24.999 "abort": false, 00:23:24.999 "seek_hole": true, 00:23:24.999 "seek_data": true, 00:23:24.999 "copy": false, 00:23:24.999 "nvme_iov_md": false 00:23:24.999 }, 00:23:24.999 "driver_specific": { 00:23:24.999 "lvol": { 00:23:24.999 "lvol_store_uuid": "84f28f1b-67c7-4c6f-a896-449088253f0f", 00:23:24.999 "base_bdev": "nvme0n1", 00:23:24.999 "thin_provision": true, 00:23:24.999 "num_allocated_clusters": 0, 00:23:24.999 "snapshot": false, 00:23:24.999 "clone": false, 00:23:24.999 "esnap_clone": false 00:23:24.999 } 00:23:24.999 } 00:23:24.999 } 00:23:24.999 ]' 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d efb52477-8ff5-4674-97a1-6e478b3b0e53 
--l2p_dram_limit 10' 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:24.999 11:54:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d efb52477-8ff5-4674-97a1-6e478b3b0e53 --l2p_dram_limit 10 -c nvc0n1p0 00:23:25.263 [2024-11-19 11:54:38.491430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.263 [2024-11-19 11:54:38.491508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:25.263 [2024-11-19 11:54:38.491524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:25.263 [2024-11-19 11:54:38.491540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.263 [2024-11-19 11:54:38.491613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.263 [2024-11-19 11:54:38.491627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:25.263 [2024-11-19 11:54:38.491635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:25.263 [2024-11-19 11:54:38.491652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.263 [2024-11-19 11:54:38.491677] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:25.263 [2024-11-19 11:54:38.492047] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:25.263 [2024-11-19 11:54:38.492065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.263 [2024-11-19 11:54:38.492076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:25.263 [2024-11-19 11:54:38.492088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:23:25.263 [2024-11-19 11:54:38.492103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.263 [2024-11-19 11:54:38.492192] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2150e01d-a4f7-4434-84b1-1538dcc0d293 00:23:25.263 [2024-11-19 11:54:38.494137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.263 [2024-11-19 11:54:38.494185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:25.263 [2024-11-19 11:54:38.494198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:25.263 [2024-11-19 11:54:38.494206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.263 [2024-11-19 11:54:38.504233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.263 [2024-11-19 11:54:38.504277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:25.263 [2024-11-19 11:54:38.504293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.971 ms 00:23:25.263 [2024-11-19 11:54:38.504302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.263 [2024-11-19 11:54:38.504401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.263 [2024-11-19 11:54:38.504434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:25.263 [2024-11-19 11:54:38.504445] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:25.263 [2024-11-19 11:54:38.504457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.263 [2024-11-19 11:54:38.504547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.263 [2024-11-19 11:54:38.504563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:25.263 [2024-11-19 11:54:38.504574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:25.263 [2024-11-19 11:54:38.504582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.263 [2024-11-19 11:54:38.504613] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:25.263 [2024-11-19 11:54:38.506975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.263 [2024-11-19 11:54:38.507018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:25.263 [2024-11-19 11:54:38.507038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.376 ms 00:23:25.263 [2024-11-19 11:54:38.507049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.263 [2024-11-19 11:54:38.507090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.263 [2024-11-19 11:54:38.507102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:25.263 [2024-11-19 11:54:38.507112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:25.263 [2024-11-19 11:54:38.507124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.263 [2024-11-19 11:54:38.507144] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:25.263 [2024-11-19 11:54:38.507509] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:25.263 [2024-11-19 11:54:38.507527] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:25.263 [2024-11-19 11:54:38.507542] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:25.263 [2024-11-19 11:54:38.507553] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:25.263 [2024-11-19 11:54:38.507567] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:25.263 [2024-11-19 11:54:38.507576] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:25.263 [2024-11-19 11:54:38.507589] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:25.263 [2024-11-19 11:54:38.507596] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:25.263 [2024-11-19 11:54:38.507607] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:25.263 [2024-11-19 11:54:38.507617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.263 [2024-11-19 11:54:38.507627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:25.263 [2024-11-19 11:54:38.507634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:23:25.263 [2024-11-19 11:54:38.507644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.263 [2024-11-19 11:54:38.507729] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.263 [2024-11-19 11:54:38.507741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:25.263 [2024-11-19 11:54:38.507749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:25.263 [2024-11-19 11:54:38.507759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.263 [2024-11-19 11:54:38.507855] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:25.263 [2024-11-19 11:54:38.507869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:25.263 [2024-11-19 11:54:38.507879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:25.263 [2024-11-19 11:54:38.507888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.263 [2024-11-19 11:54:38.507896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:25.263 [2024-11-19 11:54:38.507905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:25.263 [2024-11-19 11:54:38.507913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:25.263 [2024-11-19 11:54:38.507922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:25.263 [2024-11-19 11:54:38.507928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:25.263 [2024-11-19 11:54:38.507938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:25.263 [2024-11-19 11:54:38.507945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:25.263 [2024-11-19 11:54:38.507955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:25.263 [2024-11-19 11:54:38.507963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:25.263 [2024-11-19 11:54:38.507973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:25.263 [2024-11-19 11:54:38.507981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:25.263 [2024-11-19 11:54:38.507991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.263 [2024-11-19 11:54:38.507998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:25.263 [2024-11-19 11:54:38.508007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:25.263 [2024-11-19 11:54:38.508013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.263 [2024-11-19 11:54:38.508022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:25.263 [2024-11-19 11:54:38.508029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:25.263 [2024-11-19 11:54:38.508037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.263 [2024-11-19 11:54:38.508044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:25.263 [2024-11-19 11:54:38.508052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:25.263 [2024-11-19 11:54:38.508059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.263 [2024-11-19 11:54:38.508068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:25.263 [2024-11-19 11:54:38.508074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:25.263 [2024-11-19 11:54:38.508082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.263 [2024-11-19 11:54:38.508090] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:25.263 [2024-11-19 11:54:38.508101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:25.263 [2024-11-19 11:54:38.508111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:25.263 [2024-11-19 11:54:38.508120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:25.263 [2024-11-19 11:54:38.508127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:25.264 [2024-11-19 11:54:38.508135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:25.264 [2024-11-19 11:54:38.508142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:25.264 [2024-11-19 11:54:38.508152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:25.264 [2024-11-19 11:54:38.508159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:25.264 [2024-11-19 11:54:38.508168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:25.264 [2024-11-19 11:54:38.508175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:25.264 [2024-11-19 11:54:38.508184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.264 [2024-11-19 11:54:38.508190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:25.264 [2024-11-19 11:54:38.508199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:25.264 [2024-11-19 11:54:38.508206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.264 [2024-11-19 11:54:38.508214] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:25.264 [2024-11-19 11:54:38.508229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:25.264 [2024-11-19 11:54:38.508241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:25.264 [2024-11-19 11:54:38.508248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.264 [2024-11-19 11:54:38.508258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:25.264 [2024-11-19 11:54:38.508265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:25.264 [2024-11-19 11:54:38.508274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:25.264 [2024-11-19 11:54:38.508281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:25.264 [2024-11-19 11:54:38.508290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:25.264 [2024-11-19 11:54:38.508296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:25.264 [2024-11-19 11:54:38.508309] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:25.264 [2024-11-19 11:54:38.508319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:25.264 [2024-11-19 11:54:38.508330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:25.264 [2024-11-19 11:54:38.508337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:25.264 [2024-11-19 11:54:38.508347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:25.264 [2024-11-19 11:54:38.508354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:25.264 [2024-11-19 11:54:38.508364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:25.264 [2024-11-19 11:54:38.508371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:25.264 [2024-11-19 11:54:38.508382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:25.264 [2024-11-19 11:54:38.508391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:25.264 [2024-11-19 11:54:38.508400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:25.264 [2024-11-19 11:54:38.508421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:25.264 [2024-11-19 11:54:38.508431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:25.264 [2024-11-19 11:54:38.508439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:25.264 [2024-11-19 11:54:38.508448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:25.264 [2024-11-19 11:54:38.508456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:25.264 [2024-11-19 11:54:38.508465] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:25.264 [2024-11-19 11:54:38.508476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:25.264 [2024-11-19 11:54:38.508486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:25.264 [2024-11-19 11:54:38.508493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:25.264 [2024-11-19 11:54:38.508502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:25.264 [2024-11-19 11:54:38.508510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:25.264 [2024-11-19 11:54:38.508519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.264 [2024-11-19 11:54:38.508533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:25.264 [2024-11-19 11:54:38.508545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:23:25.264 [2024-11-19 11:54:38.508552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.264 [2024-11-19 11:54:38.508621] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:25.264 [2024-11-19 11:54:38.508633] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:29.509 [2024-11-19 11:54:42.469974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.509 [2024-11-19 11:54:42.470048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:29.509 [2024-11-19 11:54:42.470070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3961.349 ms 00:23:29.509 [2024-11-19 11:54:42.470079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.509 [2024-11-19 11:54:42.480206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.509 [2024-11-19 11:54:42.480251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:29.509 [2024-11-19 11:54:42.480265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.027 ms 00:23:29.509 [2024-11-19 11:54:42.480272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.509 [2024-11-19 11:54:42.480389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.509 [2024-11-19 11:54:42.480399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:29.509 [2024-11-19 11:54:42.480442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:29.509 [2024-11-19 11:54:42.480450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.509 [2024-11-19 11:54:42.489536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.509 [2024-11-19 11:54:42.489707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:29.509 [2024-11-19 11:54:42.489727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.043 ms 00:23:29.509 [2024-11-19 11:54:42.489734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.509 [2024-11-19 11:54:42.489769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.509 [2024-11-19 11:54:42.489777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:29.510 [2024-11-19 11:54:42.489790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:29.510 [2024-11-19 11:54:42.489797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.490183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.490200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:29.510 [2024-11-19 11:54:42.490212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:23:29.510 [2024-11-19 11:54:42.490219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.490339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.490352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:29.510 [2024-11-19 11:54:42.490364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:29.510 [2024-11-19 11:54:42.490375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.507692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.507926] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:29.510 [2024-11-19 11:54:42.507964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.288 ms 00:23:29.510 [2024-11-19 11:54:42.507979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.518381] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:29.510 [2024-11-19 11:54:42.521369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.521422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:29.510 [2024-11-19 11:54:42.521433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.180 ms 00:23:29.510 [2024-11-19 11:54:42.521443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.589619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.589821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:29.510 [2024-11-19 11:54:42.589839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.145 ms 00:23:29.510 [2024-11-19 11:54:42.589852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.590031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.590043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:29.510 [2024-11-19 11:54:42.590052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:23:29.510 [2024-11-19 11:54:42.590062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.593736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.593774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:29.510 [2024-11-19 11:54:42.593785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.644 ms 00:23:29.510 [2024-11-19 11:54:42.593794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.597114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.597240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:29.510 [2024-11-19 11:54:42.597255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:23:29.510 [2024-11-19 11:54:42.597263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.597583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.597602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:29.510 [2024-11-19 11:54:42.597611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:23:29.510 [2024-11-19 11:54:42.597622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.628144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.628277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:29.510 [2024-11-19 11:54:42.628298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.503 ms 00:23:29.510 [2024-11-19 11:54:42.628308] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.632896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.632935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:29.510 [2024-11-19 11:54:42.632945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.510 ms 00:23:29.510 [2024-11-19 11:54:42.632954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.636617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.636658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:29.510 [2024-11-19 11:54:42.636667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.630 ms 00:23:29.510 [2024-11-19 11:54:42.636676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.641110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.641147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:29.510 [2024-11-19 11:54:42.641157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.401 ms 00:23:29.510 [2024-11-19 11:54:42.641170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.641208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.641219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:29.510 [2024-11-19 11:54:42.641228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:29.510 [2024-11-19 11:54:42.641238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.641301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.510 [2024-11-19 11:54:42.641312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:29.510 [2024-11-19 11:54:42.641321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:29.510 [2024-11-19 11:54:42.641331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.510 [2024-11-19 11:54:42.642184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4150.407 ms, result 0 00:23:29.510 { 00:23:29.510 "name": "ftl0", 00:23:29.510 "uuid": "2150e01d-a4f7-4434-84b1-1538dcc0d293" 00:23:29.510 } 00:23:29.510 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:29.510 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:29.510 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:29.510 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:29.510 11:54:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:29.770 /dev/nbd0 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:29.770 1+0 records in 00:23:29.770 1+0 records out 00:23:29.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255632 s, 16.0 MB/s 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:23:29.770 11:54:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:29.770 [2024-11-19 11:54:43.158968] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:29.770 [2024-11-19 11:54:43.159103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89510 ] 00:23:30.028 [2024-11-19 11:54:43.293256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.028 [2024-11-19 11:54:43.326869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.965  [2024-11-19T11:54:45.759Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-19T11:54:46.692Z] Copying: 384/1024 [MB] (189 MBps) [2024-11-19T11:54:47.626Z] Copying: 576/1024 [MB] (191 MBps) [2024-11-19T11:54:48.559Z] Copying: 772/1024 [MB] (196 MBps) [2024-11-19T11:54:48.559Z] Copying: 987/1024 [MB] (214 MBps) [2024-11-19T11:54:48.818Z] Copying: 1024/1024 [MB] (average 198 MBps) 00:23:35.406 00:23:35.406 11:54:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:37.940 11:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:37.940 [2024-11-19 11:54:50.855308] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:23:37.940 [2024-11-19 11:54:50.855448] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89599 ] 00:23:37.940 [2024-11-19 11:54:50.991324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.940 [2024-11-19 11:54:51.023901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.876  [2024-11-19T11:54:53.225Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-19T11:54:54.170Z] Copying: 57/1024 [MB] (27 MBps) [2024-11-19T11:54:55.106Z] Copying: 87/1024 [MB] (30 MBps) [2024-11-19T11:54:56.525Z] Copying: 123/1024 [MB] (35 MBps) [2024-11-19T11:54:57.092Z] Copying: 154/1024 [MB] (31 MBps) [2024-11-19T11:54:58.467Z] Copying: 185/1024 [MB] (30 MBps) [2024-11-19T11:54:59.400Z] Copying: 216/1024 [MB] (30 MBps) [2024-11-19T11:55:00.333Z] Copying: 247/1024 [MB] (30 MBps) [2024-11-19T11:55:01.323Z] Copying: 277/1024 [MB] (30 MBps) [2024-11-19T11:55:02.258Z] Copying: 308/1024 [MB] (31 MBps) [2024-11-19T11:55:03.192Z] Copying: 340/1024 [MB] (31 MBps) [2024-11-19T11:55:04.127Z] Copying: 371/1024 [MB] (31 MBps) [2024-11-19T11:55:05.494Z] Copying: 403/1024 [MB] (31 MBps) [2024-11-19T11:55:06.428Z] Copying: 433/1024 [MB] (30 MBps) [2024-11-19T11:55:07.361Z] Copying: 465/1024 [MB] (32 MBps) [2024-11-19T11:55:08.293Z] Copying: 495/1024 [MB] (30 MBps) [2024-11-19T11:55:09.226Z] Copying: 518/1024 [MB] (22 MBps) [2024-11-19T11:55:10.184Z] Copying: 549/1024 [MB] (30 MBps) [2024-11-19T11:55:11.119Z] Copying: 581/1024 [MB] (31 MBps) [2024-11-19T11:55:12.494Z] Copying: 611/1024 [MB] (30 MBps) [2024-11-19T11:55:13.430Z] Copying: 642/1024 [MB] (31 MBps) [2024-11-19T11:55:14.363Z] Copying: 673/1024 [MB] (30 MBps) [2024-11-19T11:55:15.297Z] Copying: 705/1024 [MB] (31 MBps) [2024-11-19T11:55:16.230Z] Copying: 736/1024 [MB] (31 MBps) [2024-11-19T11:55:17.163Z] Copying: 766/1024 [MB] (30 MBps) [2024-11-19T11:55:18.096Z] Copying: 796/1024 [MB] (29 MBps) [2024-11-19T11:55:19.469Z] Copying: 826/1024 [MB] (29 MBps) [2024-11-19T11:55:20.402Z] Copying: 858/1024 [MB] (32 MBps) [2024-11-19T11:55:21.335Z] Copying: 894/1024 [MB] (36 MBps) [2024-11-19T11:55:22.269Z] Copying: 925/1024 [MB] (31 MBps) [2024-11-19T11:55:23.202Z] Copying: 960/1024 [MB] (34 MBps) [2024-11-19T11:55:24.134Z] Copying: 991/1024 [MB] (31 MBps) [2024-11-19T11:55:24.392Z] Copying: 1019/1024 [MB] (27 MBps) [2024-11-19T11:55:24.392Z] Copying: 1024/1024 [MB] (average 30 MBps) 00:24:10.980 00:24:10.980 11:55:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:10.980 11:55:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:11.237 11:55:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:11.497 [2024-11-19 11:55:24.766714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.497 [2024-11-19 11:55:24.766767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:11.497 [2024-11-19 11:55:24.766782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:11.497 [2024-11-19 11:55:24.766791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.497 [2024-11-19 11:55:24.766817] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:24:11.497 [2024-11-19 11:55:24.767263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.497 [2024-11-19 11:55:24.767286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:11.497 [2024-11-19 11:55:24.767295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:24:11.497 [2024-11-19 11:55:24.767309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.497 [2024-11-19 11:55:24.768871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.497 [2024-11-19 11:55:24.768909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:11.497 [2024-11-19 11:55:24.768920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.539 ms 00:24:11.497 [2024-11-19 11:55:24.768930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.497 [2024-11-19 11:55:24.782307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.497 [2024-11-19 11:55:24.782356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:11.497 [2024-11-19 11:55:24.782367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.357 ms 00:24:11.497 [2024-11-19 11:55:24.782382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.497 [2024-11-19 11:55:24.788581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.497 [2024-11-19 11:55:24.788620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:11.497 [2024-11-19 11:55:24.788630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.146 ms 00:24:11.497 [2024-11-19 11:55:24.788640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.497 [2024-11-19 11:55:24.790048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.497 [2024-11-19 11:55:24.790089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:11.497 [2024-11-19 11:55:24.790098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms 00:24:11.497 [2024-11-19 11:55:24.790107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.497 [2024-11-19 11:55:24.793979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.497 [2024-11-19 11:55:24.794021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:11.497 [2024-11-19 11:55:24.794032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.840 ms 00:24:11.497 [2024-11-19 11:55:24.794044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.497 [2024-11-19 11:55:24.794166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.497 [2024-11-19 11:55:24.794177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:11.497 [2024-11-19 11:55:24.794186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:24:11.497 [2024-11-19 11:55:24.794199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.497 [2024-11-19 11:55:24.795996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.497 [2024-11-19 11:55:24.796142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:11.497 [2024-11-19 11:55:24.796157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms 00:24:11.497 
[2024-11-19 11:55:24.796167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.497 [2024-11-19 11:55:24.797306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.497 [2024-11-19 11:55:24.797346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:11.497 [2024-11-19 11:55:24.797354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.106 ms 00:24:11.497 [2024-11-19 11:55:24.797363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.497 [2024-11-19 11:55:24.798352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.497 [2024-11-19 11:55:24.798387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:11.497 [2024-11-19 11:55:24.798395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:24:11.497 [2024-11-19 11:55:24.798404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.497 [2024-11-19 11:55:24.799349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.497 [2024-11-19 11:55:24.799490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:11.497 [2024-11-19 11:55:24.799504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:24:11.497 [2024-11-19 11:55:24.799513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.497 [2024-11-19 11:55:24.799543] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:11.497 [2024-11-19 11:55:24.799559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:11.497 [2024-11-19 11:55:24.799573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:11.497 [2024-11-19 11:55:24.799582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:24:11.498 [2024-11-19 11:55:24.799690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.799993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800295] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:11.498 [2024-11-19 11:55:24.800318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:11.499 [2024-11-19 11:55:24.800327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:11.499 [2024-11-19 11:55:24.800334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:11.499 [2024-11-19 11:55:24.800342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:11.499 [2024-11-19 11:55:24.800349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:11.499 [2024-11-19 11:55:24.800358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:11.499 [2024-11-19 11:55:24.800365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:11.499 [2024-11-19 11:55:24.800375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:11.499 [2024-11-19 11:55:24.800384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:11.499 [2024-11-19 11:55:24.800403] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:11.499 [2024-11-19 11:55:24.800427] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2150e01d-a4f7-4434-84b1-1538dcc0d293 00:24:11.499 [2024-11-19 11:55:24.800437] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:11.499 [2024-11-19 11:55:24.800447] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:11.499 [2024-11-19 11:55:24.800474] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:11.499 [2024-11-19 11:55:24.800482] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:11.499 [2024-11-19 11:55:24.800490] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:11.499 [2024-11-19 11:55:24.800506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:11.499 [2024-11-19 11:55:24.800516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:11.499 [2024-11-19 11:55:24.800522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:11.499 [2024-11-19 11:55:24.800529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:11.499 [2024-11-19 11:55:24.800536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.499 [2024-11-19 11:55:24.800545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:11.499 [2024-11-19 11:55:24.800556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:24:11.499 [2024-11-19 11:55:24.800565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.801964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.499 [2024-11-19 11:55:24.801991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:24:11.499 [2024-11-19 11:55:24.802000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:24:11.499 [2024-11-19 11:55:24.802009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.802085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.499 [2024-11-19 11:55:24.802095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:11.499 [2024-11-19 11:55:24.802103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:11.499 [2024-11-19 11:55:24.802111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.807170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.499 [2024-11-19 11:55:24.807330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:11.499 [2024-11-19 11:55:24.807346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.499 [2024-11-19 11:55:24.807356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.807433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.499 [2024-11-19 11:55:24.807444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:11.499 [2024-11-19 11:55:24.807452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.499 [2024-11-19 11:55:24.807462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.807521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.499 [2024-11-19 11:55:24.807538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:11.499 [2024-11-19 11:55:24.807545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.499 [2024-11-19 11:55:24.807554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.807572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.499 [2024-11-19 11:55:24.807581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:11.499 [2024-11-19 11:55:24.807588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.499 [2024-11-19 11:55:24.807597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.816471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.499 [2024-11-19 11:55:24.816520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:11.499 [2024-11-19 11:55:24.816532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.499 [2024-11-19 11:55:24.816543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.823894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.499 [2024-11-19 11:55:24.823949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:11.499 [2024-11-19 11:55:24.823959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.499 [2024-11-19 11:55:24.823969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.824075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.499 [2024-11-19 
11:55:24.824098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:11.499 [2024-11-19 11:55:24.824107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.499 [2024-11-19 11:55:24.824117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.824152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.499 [2024-11-19 11:55:24.824166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:11.499 [2024-11-19 11:55:24.824174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.499 [2024-11-19 11:55:24.824186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.824248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.499 [2024-11-19 11:55:24.824265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:11.499 [2024-11-19 11:55:24.824273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.499 [2024-11-19 11:55:24.824281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.824310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.499 [2024-11-19 11:55:24.824320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:11.499 [2024-11-19 11:55:24.824327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.499 [2024-11-19 11:55:24.824335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.824371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.499 [2024-11-19 11:55:24.824382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:11.499 [2024-11-19 11:55:24.824392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.499 [2024-11-19 11:55:24.824401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.824678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:11.499 [2024-11-19 11:55:24.824720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:11.499 [2024-11-19 11:55:24.824744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:11.499 [2024-11-19 11:55:24.824766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.499 [2024-11-19 11:55:24.825002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 58.250 ms, result 0 00:24:11.499 true 00:24:11.499 11:55:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89368 00:24:11.499 11:55:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89368 00:24:11.499 11:55:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:11.757 [2024-11-19 11:55:24.908856] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
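(Editor's note on the statistics dump above: it reports "WAF: inf" alongside "total writes: 960" and "user writes: 0". Write amplification factor is the ratio of media writes to host writes, so with zero user writes the ratio is undefined and the debug dump prints "inf"; the 960 media writes here are pure startup/shutdown metadata traffic. Below is a minimal C sketch of that arithmetic, illustrative only and not the ftl_debug.c source; the function name and the counters passed as arguments are assumptions for the example.)

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Write amplification factor: media (NAND) writes divided by host writes.
 * With zero host writes the ratio is undefined, reported as infinity. */
static double waf(uint64_t total_writes, uint64_t user_writes)
{
    if (user_writes == 0)
        return INFINITY;  /* printf("%f", INFINITY) prints "inf", as in the dump above */
    return (double)total_writes / (double)user_writes;
}

int main(void)
{
    printf("WAF: %.4f\n", waf(960, 0));         /* metadata-only run above -> WAF: inf */
    printf("WAF: %.4f\n", waf(130752, 129792)); /* -> WAF: 1.0074 */
    return 0;
}

(The second call mirrors the clean-shutdown dump near the end of this log, where 130752 total writes against 129792 user writes give 130752 / 129792 ≈ 1.0074.)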
00:24:11.757 [2024-11-19 11:55:24.909108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89957 ] 00:24:11.757 [2024-11-19 11:55:25.041915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.757 [2024-11-19 11:55:25.074297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.795  [2024-11-19T11:55:27.141Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-19T11:55:28.514Z] Copying: 436/1024 [MB] (241 MBps) [2024-11-19T11:55:29.448Z] Copying: 694/1024 [MB] (258 MBps) [2024-11-19T11:55:29.448Z] Copying: 948/1024 [MB] (253 MBps) [2024-11-19T11:55:29.706Z] Copying: 1024/1024 [MB] (average 238 MBps) 00:24:16.294 00:24:16.294 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89368 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:16.294 11:55:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:16.294 [2024-11-19 11:55:29.633401] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:16.294 [2024-11-19 11:55:29.633523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90015 ] 00:24:16.552 [2024-11-19 11:55:29.763626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.552 [2024-11-19 11:55:29.796765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.552 [2024-11-19 11:55:29.882092] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:16.552 [2024-11-19 11:55:29.882161] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:16.552 [2024-11-19 11:55:29.943929] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:16.552 [2024-11-19 11:55:29.944233] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:16.552 [2024-11-19 11:55:29.944433] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:16.812 [2024-11-19 11:55:30.122835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.812 [2024-11-19 11:55:30.123029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:16.812 [2024-11-19 11:55:30.123052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:16.812 [2024-11-19 11:55:30.123065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.812 [2024-11-19 11:55:30.123123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.812 [2024-11-19 11:55:30.123132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:16.812 [2024-11-19 11:55:30.123141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:16.812 [2024-11-19 11:55:30.123147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.812 [2024-11-19 11:55:30.123170] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:16.812 
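(Editor's note on the "Copying:" ticks above: they are spdk_dd's progress meter, and the final tick's overall figure is simply total data over elapsed wall time, so 1024 MB at an average of 238 MBps implies roughly 4.3 seconds for the whole urandom fill. A small C sketch of that averaging follows; this is not spdk_dd's internal code, just the arithmetic, and the elapsed-time value is an assumption back-derived from the log.)

#include <stdint.h>
#include <stdio.h>

/* Overall average rate as printed on the final progress tick:
 * total megabytes divided by elapsed seconds. */
static double avg_mbps(uint64_t total_mb, double elapsed_s)
{
    return (double)total_mb / elapsed_s;
}

int main(void)
{
    /* 1024 MB copied in ~4.3 s reproduces the ~238 MBps average above. */
    printf("average: %.0f MBps\n", avg_mbps(1024, 4.3));
    return 0;
}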
[2024-11-19 11:55:30.123396] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:16.812 [2024-11-19 11:55:30.123427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.812 [2024-11-19 11:55:30.123437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:16.812 [2024-11-19 11:55:30.123445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:24:16.812 [2024-11-19 11:55:30.123453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.812 [2024-11-19 11:55:30.124449] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:16.812 [2024-11-19 11:55:30.126666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.812 [2024-11-19 11:55:30.126692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:16.812 [2024-11-19 11:55:30.126700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:24:16.812 [2024-11-19 11:55:30.126706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.812 [2024-11-19 11:55:30.126753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.812 [2024-11-19 11:55:30.126761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:16.812 [2024-11-19 11:55:30.126768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:16.812 [2024-11-19 11:55:30.126773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.812 [2024-11-19 11:55:30.131395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.812 [2024-11-19 11:55:30.131428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:16.812 [2024-11-19 11:55:30.131441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.573 ms 00:24:16.812 [2024-11-19 11:55:30.131448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.812 [2024-11-19 11:55:30.131526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.812 [2024-11-19 11:55:30.131534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:16.812 [2024-11-19 11:55:30.131541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:16.812 [2024-11-19 11:55:30.131547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.812 [2024-11-19 11:55:30.131587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.812 [2024-11-19 11:55:30.131597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:16.812 [2024-11-19 11:55:30.131612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:16.812 [2024-11-19 11:55:30.131619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.812 [2024-11-19 11:55:30.131638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:16.812 [2024-11-19 11:55:30.132884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.812 [2024-11-19 11:55:30.132902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:16.812 [2024-11-19 11:55:30.132909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:24:16.812 [2024-11-19 11:55:30.132916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:16.812 [2024-11-19 11:55:30.132947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.812 [2024-11-19 11:55:30.132958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:16.812 [2024-11-19 11:55:30.132965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:16.812 [2024-11-19 11:55:30.132971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.812 [2024-11-19 11:55:30.132988] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:16.812 [2024-11-19 11:55:30.133003] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:16.812 [2024-11-19 11:55:30.133036] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:16.812 [2024-11-19 11:55:30.133048] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:16.812 [2024-11-19 11:55:30.133135] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:16.812 [2024-11-19 11:55:30.133143] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:16.812 [2024-11-19 11:55:30.133151] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:16.812 [2024-11-19 11:55:30.133159] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:16.812 [2024-11-19 11:55:30.133167] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:16.812 [2024-11-19 11:55:30.133173] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:16.812 [2024-11-19 11:55:30.133179] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:16.812 [2024-11-19 11:55:30.133185] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:16.812 [2024-11-19 11:55:30.133190] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:16.812 [2024-11-19 11:55:30.133196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.812 [2024-11-19 11:55:30.133206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:16.812 [2024-11-19 11:55:30.133212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:24:16.812 [2024-11-19 11:55:30.133218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.812 [2024-11-19 11:55:30.133283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.812 [2024-11-19 11:55:30.133289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:16.812 [2024-11-19 11:55:30.133296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:16.812 [2024-11-19 11:55:30.133302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.812 [2024-11-19 11:55:30.133388] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:16.812 [2024-11-19 11:55:30.133400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:16.812 [2024-11-19 11:55:30.133435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:16.812 [2024-11-19 11:55:30.133445] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.812 [2024-11-19 11:55:30.133452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:16.812 [2024-11-19 11:55:30.133457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:16.812 [2024-11-19 11:55:30.133463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:16.812 [2024-11-19 11:55:30.133469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:16.812 [2024-11-19 11:55:30.133474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:16.812 [2024-11-19 11:55:30.133479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:16.812 [2024-11-19 11:55:30.133484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:16.812 [2024-11-19 11:55:30.133489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:16.812 [2024-11-19 11:55:30.133495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:16.812 [2024-11-19 11:55:30.133500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:16.812 [2024-11-19 11:55:30.133509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:16.812 [2024-11-19 11:55:30.133515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.813 [2024-11-19 11:55:30.133520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:16.813 [2024-11-19 11:55:30.133525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:16.813 [2024-11-19 11:55:30.133530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.813 [2024-11-19 11:55:30.133536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:16.813 [2024-11-19 11:55:30.133542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:16.813 [2024-11-19 11:55:30.133548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.813 [2024-11-19 11:55:30.133554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:16.813 [2024-11-19 11:55:30.133559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:16.813 [2024-11-19 11:55:30.133565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.813 [2024-11-19 11:55:30.133571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:16.813 [2024-11-19 11:55:30.133576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:16.813 [2024-11-19 11:55:30.133582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.813 [2024-11-19 11:55:30.133588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:16.813 [2024-11-19 11:55:30.133593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:16.813 [2024-11-19 11:55:30.133603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:16.813 [2024-11-19 11:55:30.133610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:16.813 [2024-11-19 11:55:30.133615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:16.813 [2024-11-19 11:55:30.133622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:16.813 [2024-11-19 11:55:30.133633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:16.813 
[2024-11-19 11:55:30.133640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:16.813 [2024-11-19 11:55:30.133645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:16.813 [2024-11-19 11:55:30.133651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:16.813 [2024-11-19 11:55:30.133657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:16.813 [2024-11-19 11:55:30.133663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.813 [2024-11-19 11:55:30.133669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:16.813 [2024-11-19 11:55:30.133675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:16.813 [2024-11-19 11:55:30.133680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.813 [2024-11-19 11:55:30.133686] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:16.813 [2024-11-19 11:55:30.133692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:16.813 [2024-11-19 11:55:30.133698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:16.813 [2024-11-19 11:55:30.133706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:16.813 [2024-11-19 11:55:30.133715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:16.813 [2024-11-19 11:55:30.133721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:16.813 [2024-11-19 11:55:30.133727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:16.813 [2024-11-19 11:55:30.133733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:16.813 [2024-11-19 11:55:30.133739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:16.813 [2024-11-19 11:55:30.133745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:16.813 [2024-11-19 11:55:30.133752] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:16.813 [2024-11-19 11:55:30.133760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:16.813 [2024-11-19 11:55:30.133768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:16.813 [2024-11-19 11:55:30.133774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:16.813 [2024-11-19 11:55:30.133780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:16.813 [2024-11-19 11:55:30.133787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:16.813 [2024-11-19 11:55:30.133793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:16.813 [2024-11-19 11:55:30.133804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:16.813 [2024-11-19 11:55:30.133810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:24:16.813 [2024-11-19 11:55:30.133818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:16.813 [2024-11-19 11:55:30.133824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:16.813 [2024-11-19 11:55:30.133830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:16.813 [2024-11-19 11:55:30.133841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:16.813 [2024-11-19 11:55:30.133847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:16.813 [2024-11-19 11:55:30.133853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:16.813 [2024-11-19 11:55:30.133860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:16.813 [2024-11-19 11:55:30.133866] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:16.813 [2024-11-19 11:55:30.133873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:16.813 [2024-11-19 11:55:30.133881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:16.813 [2024-11-19 11:55:30.133887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:16.813 [2024-11-19 11:55:30.133894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:16.813 [2024-11-19 11:55:30.133900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:16.813 [2024-11-19 11:55:30.133907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.813 [2024-11-19 11:55:30.133915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:16.813 [2024-11-19 11:55:30.133922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:24:16.813 [2024-11-19 11:55:30.133930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.813 [2024-11-19 11:55:30.152467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.813 [2024-11-19 11:55:30.152659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:16.813 [2024-11-19 11:55:30.152685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.496 ms 00:24:16.813 [2024-11-19 11:55:30.152694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.813 [2024-11-19 11:55:30.152802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.813 [2024-11-19 11:55:30.152811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:16.813 [2024-11-19 11:55:30.152822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:16.813 [2024-11-19 
11:55:30.152830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.813 [2024-11-19 11:55:30.161075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.813 [2024-11-19 11:55:30.161115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:16.813 [2024-11-19 11:55:30.161127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.170 ms 00:24:16.813 [2024-11-19 11:55:30.161139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.813 [2024-11-19 11:55:30.161193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.813 [2024-11-19 11:55:30.161206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:16.813 [2024-11-19 11:55:30.161216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:16.813 [2024-11-19 11:55:30.161224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.813 [2024-11-19 11:55:30.161596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.813 [2024-11-19 11:55:30.161613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:16.813 [2024-11-19 11:55:30.161631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:24:16.813 [2024-11-19 11:55:30.161640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.813 [2024-11-19 11:55:30.161777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.813 [2024-11-19 11:55:30.161787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:16.813 [2024-11-19 11:55:30.161799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:24:16.813 [2024-11-19 11:55:30.161812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.813 [2024-11-19 11:55:30.166518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.166543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:16.814 [2024-11-19 11:55:30.166554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.682 ms 00:24:16.814 [2024-11-19 11:55:30.166563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.168963] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:16.814 [2024-11-19 11:55:30.168991] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:16.814 [2024-11-19 11:55:30.169003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.169011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:16.814 [2024-11-19 11:55:30.169023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.319 ms 00:24:16.814 [2024-11-19 11:55:30.169031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.180352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.180392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:16.814 [2024-11-19 11:55:30.180402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.278 ms 00:24:16.814 [2024-11-19 11:55:30.180425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:24:16.814 [2024-11-19 11:55:30.182461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.182485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:16.814 [2024-11-19 11:55:30.182493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:24:16.814 [2024-11-19 11:55:30.182499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.183735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.183811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:16.814 [2024-11-19 11:55:30.183864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms 00:24:16.814 [2024-11-19 11:55:30.183882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.184154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.184215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:16.814 [2024-11-19 11:55:30.184274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:24:16.814 [2024-11-19 11:55:30.184292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.198395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.198585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:16.814 [2024-11-19 11:55:30.198653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.074 ms 00:24:16.814 [2024-11-19 11:55:30.198674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.204728] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:16.814 [2024-11-19 11:55:30.207199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.207297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:16.814 [2024-11-19 11:55:30.207339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.477 ms 00:24:16.814 [2024-11-19 11:55:30.207357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.207447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.207503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:16.814 [2024-11-19 11:55:30.207527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:16.814 [2024-11-19 11:55:30.207544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.207624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.207674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:16.814 [2024-11-19 11:55:30.207711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:16.814 [2024-11-19 11:55:30.207726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.207756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.207773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:16.814 
[2024-11-19 11:55:30.207788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:16.814 [2024-11-19 11:55:30.207844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.207876] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:16.814 [2024-11-19 11:55:30.207886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.207892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:16.814 [2024-11-19 11:55:30.207899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:16.814 [2024-11-19 11:55:30.207904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.211037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.211061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:16.814 [2024-11-19 11:55:30.211070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.117 ms 00:24:16.814 [2024-11-19 11:55:30.211077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.211138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.814 [2024-11-19 11:55:30.211148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:16.814 [2024-11-19 11:55:30.211158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:16.814 [2024-11-19 11:55:30.211165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.814 [2024-11-19 11:55:30.211948] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 88.797 ms, result 0 00:24:18.188  [2024-11-19T11:55:32.534Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-19T11:55:33.467Z] Copying: 92/1024 [MB] (45 MBps) [2024-11-19T11:55:34.403Z] Copying: 141/1024 [MB] (49 MBps) [2024-11-19T11:55:35.338Z] Copying: 185/1024 [MB] (44 MBps) [2024-11-19T11:55:36.273Z] Copying: 229/1024 [MB] (44 MBps) [2024-11-19T11:55:37.649Z] Copying: 273/1024 [MB] (43 MBps) [2024-11-19T11:55:38.584Z] Copying: 322/1024 [MB] (48 MBps) [2024-11-19T11:55:39.519Z] Copying: 369/1024 [MB] (47 MBps) [2024-11-19T11:55:40.489Z] Copying: 414/1024 [MB] (45 MBps) [2024-11-19T11:55:41.424Z] Copying: 460/1024 [MB] (46 MBps) [2024-11-19T11:55:42.363Z] Copying: 506/1024 [MB] (45 MBps) [2024-11-19T11:55:43.299Z] Copying: 549/1024 [MB] (42 MBps) [2024-11-19T11:55:44.239Z] Copying: 587/1024 [MB] (38 MBps) [2024-11-19T11:55:45.619Z] Copying: 627/1024 [MB] (40 MBps) [2024-11-19T11:55:46.554Z] Copying: 670/1024 [MB] (42 MBps) [2024-11-19T11:55:47.488Z] Copying: 701/1024 [MB] (31 MBps) [2024-11-19T11:55:48.423Z] Copying: 712/1024 [MB] (11 MBps) [2024-11-19T11:55:49.357Z] Copying: 727/1024 [MB] (14 MBps) [2024-11-19T11:55:50.292Z] Copying: 738/1024 [MB] (10 MBps) [2024-11-19T11:55:51.228Z] Copying: 750/1024 [MB] (11 MBps) [2024-11-19T11:55:52.604Z] Copying: 761/1024 [MB] (11 MBps) [2024-11-19T11:55:53.539Z] Copying: 772/1024 [MB] (11 MBps) [2024-11-19T11:55:54.520Z] Copying: 784/1024 [MB] (11 MBps) [2024-11-19T11:55:55.455Z] Copying: 795/1024 [MB] (11 MBps) [2024-11-19T11:55:56.391Z] Copying: 807/1024 [MB] (11 MBps) [2024-11-19T11:55:57.330Z] Copying: 820/1024 [MB] (13 MBps) [2024-11-19T11:55:58.268Z] Copying: 831/1024 [MB] (10 MBps) [2024-11-19T11:55:59.646Z] Copying: 842/1024 
[MB] (11 MBps) [2024-11-19T11:56:00.578Z] Copying: 855/1024 [MB] (12 MBps) [2024-11-19T11:56:01.513Z] Copying: 865/1024 [MB] (10 MBps) [2024-11-19T11:56:02.448Z] Copying: 876/1024 [MB] (10 MBps) [2024-11-19T11:56:03.381Z] Copying: 886/1024 [MB] (10 MBps) [2024-11-19T11:56:04.320Z] Copying: 905/1024 [MB] (19 MBps) [2024-11-19T11:56:05.256Z] Copying: 942/1024 [MB] (36 MBps) [2024-11-19T11:56:06.639Z] Copying: 963/1024 [MB] (20 MBps) [2024-11-19T11:56:07.583Z] Copying: 1007/1024 [MB] (44 MBps) [2024-11-19T11:56:07.846Z] Copying: 1023/1024 [MB] (16 MBps) [2024-11-19T11:56:07.846Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-19 11:56:07.717642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.434 [2024-11-19 11:56:07.717708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:54.434 [2024-11-19 11:56:07.717722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:54.434 [2024-11-19 11:56:07.717731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.434 [2024-11-19 11:56:07.720599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:54.434 [2024-11-19 11:56:07.722167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.434 [2024-11-19 11:56:07.722206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:54.434 [2024-11-19 11:56:07.722218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:24:54.434 [2024-11-19 11:56:07.722225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.434 [2024-11-19 11:56:07.734037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.434 [2024-11-19 11:56:07.734087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:54.434 [2024-11-19 11:56:07.734098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.783 ms 00:24:54.434 [2024-11-19 11:56:07.734107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.434 [2024-11-19 11:56:07.752200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.434 [2024-11-19 11:56:07.752431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:54.434 [2024-11-19 11:56:07.752461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.074 ms 00:24:54.434 [2024-11-19 11:56:07.752470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.434 [2024-11-19 11:56:07.758680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.434 [2024-11-19 11:56:07.758721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:54.434 [2024-11-19 11:56:07.758732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.179 ms 00:24:54.434 [2024-11-19 11:56:07.758740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.434 [2024-11-19 11:56:07.759912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.434 [2024-11-19 11:56:07.760033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:54.434 [2024-11-19 11:56:07.760047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:24:54.434 [2024-11-19 11:56:07.760054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.434 [2024-11-19 11:56:07.763393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:54.434 [2024-11-19 11:56:07.763544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:54.434 [2024-11-19 11:56:07.763578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.310 ms 00:24:54.434 [2024-11-19 11:56:07.763631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.434 [2024-11-19 11:56:07.814779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.434 [2024-11-19 11:56:07.814977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:54.434 [2024-11-19 11:56:07.815036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.097 ms 00:24:54.434 [2024-11-19 11:56:07.815060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.434 [2024-11-19 11:56:07.816822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.434 [2024-11-19 11:56:07.816932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:54.434 [2024-11-19 11:56:07.816983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:24:54.434 [2024-11-19 11:56:07.817005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.434 [2024-11-19 11:56:07.818032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.434 [2024-11-19 11:56:07.818129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:54.434 [2024-11-19 11:56:07.818179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:24:54.434 [2024-11-19 11:56:07.818203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.434 [2024-11-19 11:56:07.819058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.434 [2024-11-19 11:56:07.819155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:54.434 [2024-11-19 11:56:07.819212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:24:54.434 [2024-11-19 11:56:07.819234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.434 [2024-11-19 11:56:07.820027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.434 [2024-11-19 11:56:07.820120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:54.434 [2024-11-19 11:56:07.820167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:24:54.434 [2024-11-19 11:56:07.820188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.434 [2024-11-19 11:56:07.820224] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:54.434 [2024-11-19 11:56:07.820325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129792 / 261120 wr_cnt: 1 state: open 00:24:54.434 [2024-11-19 11:56:07.820360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.820983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821433] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:54.434 [2024-11-19 11:56:07.821490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.821541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.821588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.821616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.821644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.821699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.821879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.821910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.821938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.821966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 
11:56:07.822598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.822971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 
00:24:54.435 [2024-11-19 11:56:07.823576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:54.435 [2024-11-19 11:56:07.823903] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:54.435 [2024-11-19 11:56:07.823916] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2150e01d-a4f7-4434-84b1-1538dcc0d293 00:24:54.435 [2024-11-19 11:56:07.823927] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129792 00:24:54.435 [2024-11-19 11:56:07.823934] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130752 00:24:54.435 [2024-11-19 11:56:07.823941] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129792 00:24:54.435 [2024-11-19 11:56:07.823954] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:24:54.435 [2024-11-19 11:56:07.823961] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:54.435 [2024-11-19 11:56:07.823968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:54.435 [2024-11-19 11:56:07.823979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:54.435 [2024-11-19 11:56:07.823986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:54.435 [2024-11-19 11:56:07.823992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:54.435 [2024-11-19 11:56:07.824001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.435 [2024-11-19 11:56:07.824011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:54.435 [2024-11-19 11:56:07.824020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.778 ms 00:24:54.435 [2024-11-19 11:56:07.824027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.435 [2024-11-19 11:56:07.825532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.435 [2024-11-19 11:56:07.825618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:54.435 [2024-11-19 11:56:07.825664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.484 ms 00:24:54.435 [2024-11-19 11:56:07.825686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.435 [2024-11-19 11:56:07.825791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.435 [2024-11-19 11:56:07.825816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:54.435 [2024-11-19 11:56:07.825873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:54.435 [2024-11-19 11:56:07.825894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.435 [2024-11-19 11:56:07.830188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.435 [2024-11-19 11:56:07.830316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:54.435 [2024-11-19 11:56:07.830366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.435 [2024-11-19 11:56:07.830388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.435 [2024-11-19 11:56:07.830517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.436 [2024-11-19 11:56:07.830548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:54.436 [2024-11-19 11:56:07.830594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.436 [2024-11-19 11:56:07.830615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.436 [2024-11-19 11:56:07.830672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.436 [2024-11-19 11:56:07.830721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:54.436 [2024-11-19 11:56:07.830744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.436 [2024-11-19 11:56:07.830763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.436 [2024-11-19 11:56:07.830813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.436 [2024-11-19 11:56:07.830841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:24:54.436 [2024-11-19 11:56:07.830890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.436 [2024-11-19 11:56:07.830903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.436 [2024-11-19 11:56:07.839701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.436 [2024-11-19 11:56:07.839868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:54.436 [2024-11-19 11:56:07.839918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.436 [2024-11-19 11:56:07.839962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.696 [2024-11-19 11:56:07.846786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.696 [2024-11-19 11:56:07.846976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:54.696 [2024-11-19 11:56:07.847040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.696 [2024-11-19 11:56:07.847063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.696 [2024-11-19 11:56:07.847165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.696 [2024-11-19 11:56:07.847202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:54.696 [2024-11-19 11:56:07.847274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.696 [2024-11-19 11:56:07.847296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.696 [2024-11-19 11:56:07.847333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.696 [2024-11-19 11:56:07.847353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:54.696 [2024-11-19 11:56:07.847372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.696 [2024-11-19 11:56:07.847390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.696 [2024-11-19 11:56:07.847599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.696 [2024-11-19 11:56:07.847632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:54.696 [2024-11-19 11:56:07.847653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.696 [2024-11-19 11:56:07.847723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.696 [2024-11-19 11:56:07.847777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.696 [2024-11-19 11:56:07.847800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:54.696 [2024-11-19 11:56:07.847819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.696 [2024-11-19 11:56:07.847882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.696 [2024-11-19 11:56:07.847936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.696 [2024-11-19 11:56:07.847959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:54.696 [2024-11-19 11:56:07.847978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.696 [2024-11-19 11:56:07.847996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.696 [2024-11-19 11:56:07.848077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:54.696 [2024-11-19 11:56:07.848103] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:54.696 [2024-11-19 11:56:07.848123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:54.696 [2024-11-19 11:56:07.848142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.696 [2024-11-19 11:56:07.848279] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 133.707 ms, result 0 00:24:55.641 00:24:55.641 00:24:55.641 11:56:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:58.192 11:56:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:58.192 [2024-11-19 11:56:11.177008] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:58.192 [2024-11-19 11:56:11.177369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90435 ] 00:24:58.192 [2024-11-19 11:56:11.313242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.192 [2024-11-19 11:56:11.346318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.192 [2024-11-19 11:56:11.432585] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:58.192 [2024-11-19 11:56:11.432658] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:58.192 [2024-11-19 11:56:11.585388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.192 [2024-11-19 11:56:11.585469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:58.192 [2024-11-19 11:56:11.585486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:58.192 [2024-11-19 11:56:11.585494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.192 [2024-11-19 11:56:11.585545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.192 [2024-11-19 11:56:11.585556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:58.192 [2024-11-19 11:56:11.585564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:58.192 [2024-11-19 11:56:11.585577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.192 [2024-11-19 11:56:11.585597] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:58.192 [2024-11-19 11:56:11.585883] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:58.192 [2024-11-19 11:56:11.585902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.192 [2024-11-19 11:56:11.585910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:58.192 [2024-11-19 11:56:11.585922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:24:58.192 [2024-11-19 11:56:11.585929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.192 [2024-11-19 11:56:11.587031] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 
0, shm_clean 0 00:24:58.192 [2024-11-19 11:56:11.589348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.192 [2024-11-19 11:56:11.589514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:58.192 [2024-11-19 11:56:11.589532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.318 ms 00:24:58.192 [2024-11-19 11:56:11.589545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.192 [2024-11-19 11:56:11.589603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.192 [2024-11-19 11:56:11.589613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:58.192 [2024-11-19 11:56:11.589621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:58.192 [2024-11-19 11:56:11.589628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.192 [2024-11-19 11:56:11.594421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.192 [2024-11-19 11:56:11.594464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:58.192 [2024-11-19 11:56:11.594477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.715 ms 00:24:58.192 [2024-11-19 11:56:11.594490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.192 [2024-11-19 11:56:11.594585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.192 [2024-11-19 11:56:11.594600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:58.192 [2024-11-19 11:56:11.594608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:58.193 [2024-11-19 11:56:11.594615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.193 [2024-11-19 11:56:11.594655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.193 [2024-11-19 11:56:11.594663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:58.193 [2024-11-19 11:56:11.594671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:58.193 [2024-11-19 11:56:11.594678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.193 [2024-11-19 11:56:11.594709] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:58.193 [2024-11-19 11:56:11.596066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.193 [2024-11-19 11:56:11.596098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:58.193 [2024-11-19 11:56:11.596107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms 00:24:58.193 [2024-11-19 11:56:11.596114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.193 [2024-11-19 11:56:11.596145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.193 [2024-11-19 11:56:11.596154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:58.193 [2024-11-19 11:56:11.596161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:58.193 [2024-11-19 11:56:11.596169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.193 [2024-11-19 11:56:11.596205] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:58.193 [2024-11-19 11:56:11.596222] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:58.193 [2024-11-19 11:56:11.596262] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:58.193 [2024-11-19 11:56:11.596288] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:58.193 [2024-11-19 11:56:11.596391] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:58.193 [2024-11-19 11:56:11.596403] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:58.193 [2024-11-19 11:56:11.596441] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:58.193 [2024-11-19 11:56:11.596458] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:58.193 [2024-11-19 11:56:11.596476] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:58.193 [2024-11-19 11:56:11.596489] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:58.193 [2024-11-19 11:56:11.596501] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:58.193 [2024-11-19 11:56:11.596509] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:58.193 [2024-11-19 11:56:11.596516] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:58.193 [2024-11-19 11:56:11.596524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.193 [2024-11-19 11:56:11.596531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:58.193 [2024-11-19 11:56:11.596539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:24:58.193 [2024-11-19 11:56:11.596546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.193 [2024-11-19 11:56:11.596634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.193 [2024-11-19 11:56:11.596645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:58.193 [2024-11-19 11:56:11.596652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:58.193 [2024-11-19 11:56:11.596659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.193 [2024-11-19 11:56:11.596761] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:58.193 [2024-11-19 11:56:11.596809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:58.193 [2024-11-19 11:56:11.596821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:58.193 [2024-11-19 11:56:11.596829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.193 [2024-11-19 11:56:11.596838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:58.193 [2024-11-19 11:56:11.596845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:58.193 [2024-11-19 11:56:11.596853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:58.193 [2024-11-19 11:56:11.596861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:58.193 [2024-11-19 11:56:11.596869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:58.193 [2024-11-19 11:56:11.596876] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:58.193 [2024-11-19 11:56:11.596883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:58.193 [2024-11-19 11:56:11.596891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:58.193 [2024-11-19 11:56:11.596898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:58.193 [2024-11-19 11:56:11.596905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:58.193 [2024-11-19 11:56:11.596913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:58.193 [2024-11-19 11:56:11.596920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.193 [2024-11-19 11:56:11.596930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:58.193 [2024-11-19 11:56:11.596938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:58.193 [2024-11-19 11:56:11.596945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.193 [2024-11-19 11:56:11.596953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:58.193 [2024-11-19 11:56:11.596960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:58.193 [2024-11-19 11:56:11.596968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.193 [2024-11-19 11:56:11.596975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:58.193 [2024-11-19 11:56:11.596983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:58.193 [2024-11-19 11:56:11.596990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.193 [2024-11-19 11:56:11.596998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:58.193 [2024-11-19 11:56:11.597005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:58.193 [2024-11-19 11:56:11.597012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.193 [2024-11-19 11:56:11.597019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:58.193 [2024-11-19 11:56:11.597026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:58.193 [2024-11-19 11:56:11.597033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:58.193 [2024-11-19 11:56:11.597039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:58.193 [2024-11-19 11:56:11.597047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:58.193 [2024-11-19 11:56:11.597054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:58.193 [2024-11-19 11:56:11.597060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:58.193 [2024-11-19 11:56:11.597067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:58.193 [2024-11-19 11:56:11.597073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:58.193 [2024-11-19 11:56:11.597079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:58.193 [2024-11-19 11:56:11.597086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:58.193 [2024-11-19 11:56:11.597092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.193 [2024-11-19 11:56:11.597098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:58.193 [2024-11-19 
11:56:11.597104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:58.193 [2024-11-19 11:56:11.597110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.193 [2024-11-19 11:56:11.597116] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:58.193 [2024-11-19 11:56:11.597124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:58.193 [2024-11-19 11:56:11.597133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:58.193 [2024-11-19 11:56:11.597141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:58.194 [2024-11-19 11:56:11.597149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:58.194 [2024-11-19 11:56:11.597157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:58.194 [2024-11-19 11:56:11.597163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:58.194 [2024-11-19 11:56:11.597170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:58.194 [2024-11-19 11:56:11.597176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:58.194 [2024-11-19 11:56:11.597182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:58.194 [2024-11-19 11:56:11.597190] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:58.194 [2024-11-19 11:56:11.597199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:58.194 [2024-11-19 11:56:11.597208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:58.194 [2024-11-19 11:56:11.597215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:58.194 [2024-11-19 11:56:11.597222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:58.194 [2024-11-19 11:56:11.597229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:58.194 [2024-11-19 11:56:11.597236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:58.194 [2024-11-19 11:56:11.597243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:58.194 [2024-11-19 11:56:11.597250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:58.194 [2024-11-19 11:56:11.597256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:58.194 [2024-11-19 11:56:11.597264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:58.194 [2024-11-19 11:56:11.597280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:58.194 [2024-11-19 11:56:11.597287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 
blk_sz:0x20 00:24:58.194 [2024-11-19 11:56:11.597294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:58.194 [2024-11-19 11:56:11.597301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:58.194 [2024-11-19 11:56:11.597308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:58.194 [2024-11-19 11:56:11.597315] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:58.194 [2024-11-19 11:56:11.597322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:58.194 [2024-11-19 11:56:11.597330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:58.194 [2024-11-19 11:56:11.597337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:58.194 [2024-11-19 11:56:11.597343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:58.194 [2024-11-19 11:56:11.597350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:58.194 [2024-11-19 11:56:11.597357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.194 [2024-11-19 11:56:11.597368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:58.194 [2024-11-19 11:56:11.597375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:24:58.194 [2024-11-19 11:56:11.597384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.613395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.613469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:58.457 [2024-11-19 11:56:11.613483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.954 ms 00:24:58.457 [2024-11-19 11:56:11.613490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.613596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.613611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:58.457 [2024-11-19 11:56:11.613624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:58.457 [2024-11-19 11:56:11.613631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.622341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.622393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:58.457 [2024-11-19 11:56:11.622426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.636 ms 00:24:58.457 [2024-11-19 11:56:11.622437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.622485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.622497] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:58.457 [2024-11-19 11:56:11.622507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:58.457 [2024-11-19 11:56:11.622516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.622882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.622914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:58.457 [2024-11-19 11:56:11.622926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:24:58.457 [2024-11-19 11:56:11.622935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.623094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.623114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:58.457 [2024-11-19 11:56:11.623126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:24:58.457 [2024-11-19 11:56:11.623136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.628006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.628047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:58.457 [2024-11-19 11:56:11.628058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.845 ms 00:24:58.457 [2024-11-19 11:56:11.628067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.630483] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:58.457 [2024-11-19 11:56:11.630517] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:58.457 [2024-11-19 11:56:11.630528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.630536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:58.457 [2024-11-19 11:56:11.630545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:24:58.457 [2024-11-19 11:56:11.630552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.645019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.645078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:58.457 [2024-11-19 11:56:11.645090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.429 ms 00:24:58.457 [2024-11-19 11:56:11.645097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.647208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.647242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:58.457 [2024-11-19 11:56:11.647252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.066 ms 00:24:58.457 [2024-11-19 11:56:11.647259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.649148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.649190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:24:58.457 [2024-11-19 11:56:11.649201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms 00:24:58.457 [2024-11-19 11:56:11.649208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.649584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.649606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:58.457 [2024-11-19 11:56:11.649615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:24:58.457 [2024-11-19 11:56:11.649623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.664514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.664574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:58.457 [2024-11-19 11:56:11.664586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.874 ms 00:24:58.457 [2024-11-19 11:56:11.664594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.672039] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:58.457 [2024-11-19 11:56:11.674969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.675002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:58.457 [2024-11-19 11:56:11.675023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.323 ms 00:24:58.457 [2024-11-19 11:56:11.675031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.675101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.675111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:58.457 [2024-11-19 11:56:11.675124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:58.457 [2024-11-19 11:56:11.675131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.676822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.676926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:58.457 [2024-11-19 11:56:11.676982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.647 ms 00:24:58.457 [2024-11-19 11:56:11.677010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.677054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.677081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:58.457 [2024-11-19 11:56:11.677258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:58.457 [2024-11-19 11:56:11.677269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.677305] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:58.457 [2024-11-19 11:56:11.677318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.677329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:58.457 [2024-11-19 11:56:11.677336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 
00:24:58.457 [2024-11-19 11:56:11.677343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.680798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.680834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:58.457 [2024-11-19 11:56:11.680843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.408 ms 00:24:58.457 [2024-11-19 11:56:11.680850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.680920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.457 [2024-11-19 11:56:11.680929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:58.457 [2024-11-19 11:56:11.680938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:58.457 [2024-11-19 11:56:11.680945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.457 [2024-11-19 11:56:11.681830] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 96.042 ms, result 0 00:24:59.842  [2024-11-19T11:56:14.199Z] Copying: 968/1048576 [kB] (968 kBps) [2024-11-19T11:56:15.145Z] Copying: 5632/1048576 [kB] (4664 kBps) [2024-11-19T11:56:16.088Z] Copying: 55/1024 [MB] (49 MBps) [2024-11-19T11:56:17.027Z] Copying: 105/1024 [MB] (50 MBps) [2024-11-19T11:56:17.966Z] Copying: 152/1024 [MB] (47 MBps) [2024-11-19T11:56:18.901Z] Copying: 191/1024 [MB] (38 MBps) [2024-11-19T11:56:20.286Z] Copying: 246/1024 [MB] (55 MBps) [2024-11-19T11:56:21.221Z] Copying: 307/1024 [MB] (60 MBps) [2024-11-19T11:56:22.157Z] Copying: 359/1024 [MB] (52 MBps) [2024-11-19T11:56:23.097Z] Copying: 393/1024 [MB] (34 MBps) [2024-11-19T11:56:24.041Z] Copying: 421/1024 [MB] (28 MBps) [2024-11-19T11:56:24.983Z] Copying: 446/1024 [MB] (25 MBps) [2024-11-19T11:56:25.926Z] Copying: 475/1024 [MB] (28 MBps) [2024-11-19T11:56:26.871Z] Copying: 505/1024 [MB] (30 MBps) [2024-11-19T11:56:28.259Z] Copying: 540/1024 [MB] (35 MBps) [2024-11-19T11:56:29.213Z] Copying: 559/1024 [MB] (18 MBps) [2024-11-19T11:56:30.157Z] Copying: 588/1024 [MB] (28 MBps) [2024-11-19T11:56:31.099Z] Copying: 605/1024 [MB] (17 MBps) [2024-11-19T11:56:32.041Z] Copying: 621/1024 [MB] (15 MBps) [2024-11-19T11:56:32.983Z] Copying: 637/1024 [MB] (16 MBps) [2024-11-19T11:56:33.928Z] Copying: 652/1024 [MB] (15 MBps) [2024-11-19T11:56:34.872Z] Copying: 667/1024 [MB] (14 MBps) [2024-11-19T11:56:36.266Z] Copying: 682/1024 [MB] (15 MBps) [2024-11-19T11:56:37.213Z] Copying: 698/1024 [MB] (15 MBps) [2024-11-19T11:56:38.158Z] Copying: 713/1024 [MB] (15 MBps) [2024-11-19T11:56:39.099Z] Copying: 728/1024 [MB] (15 MBps) [2024-11-19T11:56:40.107Z] Copying: 746/1024 [MB] (18 MBps) [2024-11-19T11:56:41.043Z] Copying: 763/1024 [MB] (17 MBps) [2024-11-19T11:56:41.978Z] Copying: 779/1024 [MB] (15 MBps) [2024-11-19T11:56:42.912Z] Copying: 795/1024 [MB] (16 MBps) [2024-11-19T11:56:44.289Z] Copying: 812/1024 [MB] (17 MBps) [2024-11-19T11:56:45.228Z] Copying: 830/1024 [MB] (17 MBps) [2024-11-19T11:56:46.164Z] Copying: 848/1024 [MB] (18 MBps) [2024-11-19T11:56:47.113Z] Copying: 868/1024 [MB] (19 MBps) [2024-11-19T11:56:48.046Z] Copying: 884/1024 [MB] (16 MBps) [2024-11-19T11:56:48.979Z] Copying: 901/1024 [MB] (16 MBps) [2024-11-19T11:56:49.912Z] Copying: 918/1024 [MB] (17 MBps) [2024-11-19T11:56:51.285Z] Copying: 935/1024 [MB] (17 MBps) [2024-11-19T11:56:52.216Z] Copying: 951/1024 [MB] (16 MBps) 
[2024-11-19T11:56:53.151Z] Copying: 968/1024 [MB] (17 MBps) [2024-11-19T11:56:54.086Z] Copying: 985/1024 [MB] (17 MBps) [2024-11-19T11:56:55.020Z] Copying: 1002/1024 [MB] (16 MBps) [2024-11-19T11:56:55.278Z] Copying: 1019/1024 [MB] (16 MBps) [2024-11-19T11:56:55.538Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-19 11:56:55.383708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.126 [2024-11-19 11:56:55.384196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:42.126 [2024-11-19 11:56:55.384278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:42.126 [2024-11-19 11:56:55.384303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.126 [2024-11-19 11:56:55.384368] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:42.126 [2024-11-19 11:56:55.384928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.126 [2024-11-19 11:56:55.385158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:42.126 [2024-11-19 11:56:55.385205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:25:42.126 [2024-11-19 11:56:55.385300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.126 [2024-11-19 11:56:55.385762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.126 [2024-11-19 11:56:55.385905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:42.126 [2024-11-19 11:56:55.386001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:25:42.126 [2024-11-19 11:56:55.386253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.126 [2024-11-19 11:56:55.400429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.126 [2024-11-19 11:56:55.400535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:42.126 [2024-11-19 11:56:55.400588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.116 ms 00:25:42.126 [2024-11-19 11:56:55.400616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.126 [2024-11-19 11:56:55.406874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.126 [2024-11-19 11:56:55.406981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:42.126 [2024-11-19 11:56:55.407030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.178 ms 00:25:42.126 [2024-11-19 11:56:55.407053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.126 [2024-11-19 11:56:55.408525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.126 [2024-11-19 11:56:55.408628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:42.126 [2024-11-19 11:56:55.408673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:25:42.126 [2024-11-19 11:56:55.408695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.126 [2024-11-19 11:56:55.411873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.126 [2024-11-19 11:56:55.411970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:42.126 [2024-11-19 11:56:55.412026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.140 ms 00:25:42.126 [2024-11-19 11:56:55.412054] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.126 [2024-11-19 11:56:55.414189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.126 [2024-11-19 11:56:55.414282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:42.126 [2024-11-19 11:56:55.414327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.985 ms 00:25:42.126 [2024-11-19 11:56:55.414356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.126 [2024-11-19 11:56:55.416268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.126 [2024-11-19 11:56:55.416367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:42.126 [2024-11-19 11:56:55.416423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.863 ms 00:25:42.126 [2024-11-19 11:56:55.416446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.126 [2024-11-19 11:56:55.417901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.126 [2024-11-19 11:56:55.417999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:42.126 [2024-11-19 11:56:55.418042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:25:42.126 [2024-11-19 11:56:55.418064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.126 [2024-11-19 11:56:55.420176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.126 [2024-11-19 11:56:55.420302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:42.126 [2024-11-19 11:56:55.420354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.074 ms 00:25:42.126 [2024-11-19 11:56:55.420376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.126 [2024-11-19 11:56:55.421791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.126 [2024-11-19 11:56:55.421915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:42.126 [2024-11-19 11:56:55.421972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:25:42.126 [2024-11-19 11:56:55.421994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.126 [2024-11-19 11:56:55.422288] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:42.126 [2024-11-19 11:56:55.422698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:42.126 [2024-11-19 11:56:55.422723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:42.126 [2024-11-19 11:56:55.422733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 
wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:42.126 [2024-11-19 11:56:55.422902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.422910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.422917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.422924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.422932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.422939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.422947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.422955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.422963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.422971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.422978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.422986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.422993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423150] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423332] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:42.127 [2024-11-19 11:56:55.423483] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:42.127 [2024-11-19 11:56:55.423491] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2150e01d-a4f7-4434-84b1-1538dcc0d293 00:25:42.127 [2024-11-19 11:56:55.423504] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:42.127 [2024-11-19 11:56:55.423512] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 134848 00:25:42.127 [2024-11-19 11:56:55.423518] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 132864 00:25:42.127 [2024-11-19 11:56:55.423527] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:25:42.127 [2024-11-19 11:56:55.423535] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:42.127 [2024-11-19 11:56:55.423543] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:42.127 [2024-11-19 11:56:55.423550] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] high: 0 00:25:42.127 [2024-11-19 11:56:55.423557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:42.127 [2024-11-19 11:56:55.423563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:42.127 [2024-11-19 11:56:55.423572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.127 [2024-11-19 11:56:55.423580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:42.127 [2024-11-19 11:56:55.423588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:25:42.127 [2024-11-19 11:56:55.423595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.127 [2024-11-19 11:56:55.424997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.127 [2024-11-19 11:56:55.425014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:42.128 [2024-11-19 11:56:55.425029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:25:42.128 [2024-11-19 11:56:55.425036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.425113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.128 [2024-11-19 11:56:55.425121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:42.128 [2024-11-19 11:56:55.425129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:42.128 [2024-11-19 11:56:55.425140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.429461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.128 [2024-11-19 11:56:55.429488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.128 [2024-11-19 11:56:55.429497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.128 [2024-11-19 11:56:55.429504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.429554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.128 [2024-11-19 11:56:55.429562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.128 [2024-11-19 11:56:55.429569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.128 [2024-11-19 11:56:55.429581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.429634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.128 [2024-11-19 11:56:55.429644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:42.128 [2024-11-19 11:56:55.429652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.128 [2024-11-19 11:56:55.429658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.429673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.128 [2024-11-19 11:56:55.429680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.128 [2024-11-19 11:56:55.429687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.128 [2024-11-19 11:56:55.429694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.438195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.128 [2024-11-19 
11:56:55.438231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:42.128 [2024-11-19 11:56:55.438242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.128 [2024-11-19 11:56:55.438250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.445147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.128 [2024-11-19 11:56:55.445182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:42.128 [2024-11-19 11:56:55.445192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.128 [2024-11-19 11:56:55.445207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.445233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.128 [2024-11-19 11:56:55.445241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:42.128 [2024-11-19 11:56:55.445249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.128 [2024-11-19 11:56:55.445260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.445301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.128 [2024-11-19 11:56:55.445309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:42.128 [2024-11-19 11:56:55.445316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.128 [2024-11-19 11:56:55.445324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.445388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.128 [2024-11-19 11:56:55.445397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:42.128 [2024-11-19 11:56:55.445593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.128 [2024-11-19 11:56:55.445625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.445672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.128 [2024-11-19 11:56:55.445696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:42.128 [2024-11-19 11:56:55.445715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.128 [2024-11-19 11:56:55.445733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.445783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.128 [2024-11-19 11:56:55.445806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:42.128 [2024-11-19 11:56:55.445825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.128 [2024-11-19 11:56:55.445844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.445896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:42.128 [2024-11-19 11:56:55.445963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:42.128 [2024-11-19 11:56:55.445993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:42.128 [2024-11-19 11:56:55.446013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.128 [2024-11-19 11:56:55.446139] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 62.403 ms, result 0 00:25:42.387 00:25:42.387 00:25:42.387 11:56:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:44.955 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:44.955 11:56:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:44.955 [2024-11-19 11:56:57.855884] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:44.955 [2024-11-19 11:56:57.856037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90916 ] 00:25:44.955 [2024-11-19 11:56:57.990917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.955 [2024-11-19 11:56:58.040479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.955 [2024-11-19 11:56:58.143062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:44.955 [2024-11-19 11:56:58.143135] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:44.955 [2024-11-19 11:56:58.301166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.955 [2024-11-19 11:56:58.301360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:44.955 [2024-11-19 11:56:58.301385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:44.955 [2024-11-19 11:56:58.301394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.955 [2024-11-19 11:56:58.301467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.955 [2024-11-19 11:56:58.301479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:44.955 [2024-11-19 11:56:58.301488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:44.955 [2024-11-19 11:56:58.301501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.955 [2024-11-19 11:56:58.301521] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:44.955 [2024-11-19 11:56:58.301826] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:44.955 [2024-11-19 11:56:58.301853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.955 [2024-11-19 11:56:58.301860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:44.955 [2024-11-19 11:56:58.301869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:25:44.955 [2024-11-19 11:56:58.301876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.955 [2024-11-19 11:56:58.303156] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:44.955 [2024-11-19 11:56:58.306077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.955 [2024-11-19 11:56:58.306113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:44.955 [2024-11-19 
11:56:58.306129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.923 ms 00:25:44.955 [2024-11-19 11:56:58.306137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.955 [2024-11-19 11:56:58.306192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.955 [2024-11-19 11:56:58.306206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:44.955 [2024-11-19 11:56:58.306214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:44.955 [2024-11-19 11:56:58.306221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.955 [2024-11-19 11:56:58.311579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.955 [2024-11-19 11:56:58.311716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:44.955 [2024-11-19 11:56:58.311730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.308 ms 00:25:44.955 [2024-11-19 11:56:58.311738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.955 [2024-11-19 11:56:58.311826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.955 [2024-11-19 11:56:58.311839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:44.955 [2024-11-19 11:56:58.311846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:44.955 [2024-11-19 11:56:58.311856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.955 [2024-11-19 11:56:58.311901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.955 [2024-11-19 11:56:58.311911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:44.955 [2024-11-19 11:56:58.311918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:44.955 [2024-11-19 11:56:58.311925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.955 [2024-11-19 11:56:58.311945] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:44.955 [2024-11-19 11:56:58.313443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.955 [2024-11-19 11:56:58.313469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:44.955 [2024-11-19 11:56:58.313477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.503 ms 00:25:44.955 [2024-11-19 11:56:58.313484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.955 [2024-11-19 11:56:58.313512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.955 [2024-11-19 11:56:58.313520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:44.955 [2024-11-19 11:56:58.313528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:44.955 [2024-11-19 11:56:58.313535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.955 [2024-11-19 11:56:58.313553] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:44.955 [2024-11-19 11:56:58.313577] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:44.955 [2024-11-19 11:56:58.313616] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:44.955 [2024-11-19 11:56:58.313634] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:44.955 [2024-11-19 11:56:58.313738] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:44.955 [2024-11-19 11:56:58.313747] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:44.955 [2024-11-19 11:56:58.313761] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:44.956 [2024-11-19 11:56:58.313771] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:44.956 [2024-11-19 11:56:58.313782] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:44.956 [2024-11-19 11:56:58.313793] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:44.956 [2024-11-19 11:56:58.313800] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:44.956 [2024-11-19 11:56:58.313808] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:44.956 [2024-11-19 11:56:58.313814] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:44.956 [2024-11-19 11:56:58.313822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.956 [2024-11-19 11:56:58.313829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:44.956 [2024-11-19 11:56:58.313836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:25:44.956 [2024-11-19 11:56:58.313843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.956 [2024-11-19 11:56:58.313930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.956 [2024-11-19 11:56:58.313938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:44.956 [2024-11-19 11:56:58.313947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:25:44.956 [2024-11-19 11:56:58.313954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.956 [2024-11-19 11:56:58.314052] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:44.956 [2024-11-19 11:56:58.314062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:44.956 [2024-11-19 11:56:58.314073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:44.956 [2024-11-19 11:56:58.314081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:44.956 [2024-11-19 11:56:58.314098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:44.956 [2024-11-19 11:56:58.314112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:44.956 [2024-11-19 11:56:58.314119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:44.956 [2024-11-19 11:56:58.314132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:44.956 [2024-11-19 11:56:58.314141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:25:44.956 [2024-11-19 11:56:58.314147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:44.956 [2024-11-19 11:56:58.314154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:44.956 [2024-11-19 11:56:58.314163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:44.956 [2024-11-19 11:56:58.314169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:44.956 [2024-11-19 11:56:58.314183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:44.956 [2024-11-19 11:56:58.314189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:44.956 [2024-11-19 11:56:58.314202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.956 [2024-11-19 11:56:58.314215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:44.956 [2024-11-19 11:56:58.314221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.956 [2024-11-19 11:56:58.314233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:44.956 [2024-11-19 11:56:58.314240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.956 [2024-11-19 11:56:58.314258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:44.956 [2024-11-19 11:56:58.314265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.956 [2024-11-19 11:56:58.314277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:44.956 [2024-11-19 11:56:58.314283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:44.956 [2024-11-19 11:56:58.314296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:44.956 [2024-11-19 11:56:58.314302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:44.956 [2024-11-19 11:56:58.314308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:44.956 [2024-11-19 11:56:58.314314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:44.956 [2024-11-19 11:56:58.314321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:44.956 [2024-11-19 11:56:58.314327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:44.956 [2024-11-19 11:56:58.314339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:44.956 [2024-11-19 11:56:58.314346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314355] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:44.956 [2024-11-19 11:56:58.314362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:44.956 [2024-11-19 11:56:58.314369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:44.956 [2024-11-19 11:56:58.314378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.956 [2024-11-19 11:56:58.314386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:44.956 [2024-11-19 11:56:58.314393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:44.956 [2024-11-19 11:56:58.314400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:44.956 [2024-11-19 11:56:58.314418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:44.956 [2024-11-19 11:56:58.314425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:44.956 [2024-11-19 11:56:58.314431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:44.956 [2024-11-19 11:56:58.314440] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:44.956 [2024-11-19 11:56:58.314449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:44.956 [2024-11-19 11:56:58.314457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:44.956 [2024-11-19 11:56:58.314464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:44.956 [2024-11-19 11:56:58.314471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:44.956 [2024-11-19 11:56:58.314478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:44.956 [2024-11-19 11:56:58.314487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:44.956 [2024-11-19 11:56:58.314495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:44.956 [2024-11-19 11:56:58.314502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:44.956 [2024-11-19 11:56:58.314509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:44.956 [2024-11-19 11:56:58.314516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:44.956 [2024-11-19 11:56:58.314528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:44.956 [2024-11-19 11:56:58.314535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:44.956 [2024-11-19 11:56:58.314542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:44.956 [2024-11-19 11:56:58.314549] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:44.956 [2024-11-19 11:56:58.314557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:44.956 [2024-11-19 11:56:58.314563] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:44.956 [2024-11-19 11:56:58.314575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:44.956 [2024-11-19 11:56:58.314584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:44.956 [2024-11-19 11:56:58.314591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:44.956 [2024-11-19 11:56:58.314598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:44.956 [2024-11-19 11:56:58.314605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:44.956 [2024-11-19 11:56:58.314614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.956 [2024-11-19 11:56:58.314622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:44.956 [2024-11-19 11:56:58.314629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:25:44.956 [2024-11-19 11:56:58.314636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.956 [2024-11-19 11:56:58.331692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.956 [2024-11-19 11:56:58.331857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:44.956 [2024-11-19 11:56:58.331927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.996 ms 00:25:44.956 [2024-11-19 11:56:58.331955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.956 [2024-11-19 11:56:58.332070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.956 [2024-11-19 11:56:58.332109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:44.957 [2024-11-19 11:56:58.332132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:44.957 [2024-11-19 11:56:58.332153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 11:56:58.340896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 11:56:58.341010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:44.957 [2024-11-19 11:56:58.341058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.665 ms 00:25:44.957 [2024-11-19 11:56:58.341080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 11:56:58.341120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 11:56:58.341142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:44.957 [2024-11-19 11:56:58.341162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:44.957 [2024-11-19 11:56:58.341180] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 11:56:58.341549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 11:56:58.341604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:44.957 [2024-11-19 11:56:58.341625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:25:44.957 [2024-11-19 11:56:58.341643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 11:56:58.341786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 11:56:58.341808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:44.957 [2024-11-19 11:56:58.341879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:44.957 [2024-11-19 11:56:58.341898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 11:56:58.346580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 11:56:58.346679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:44.957 [2024-11-19 11:56:58.346731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.653 ms 00:25:44.957 [2024-11-19 11:56:58.346752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.957 [2024-11-19 11:56:58.349510] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:44.957 [2024-11-19 11:56:58.349632] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:44.957 [2024-11-19 11:56:58.349695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.957 [2024-11-19 11:56:58.349715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:44.957 [2024-11-19 11:56:58.349734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.824 ms 00:25:44.957 [2024-11-19 11:56:58.349752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 11:56:58.371919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.216 [2024-11-19 11:56:58.372066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:45.216 [2024-11-19 11:56:58.372133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.125 ms 00:25:45.216 [2024-11-19 11:56:58.372157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 11:56:58.374000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.216 [2024-11-19 11:56:58.374101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:45.216 [2024-11-19 11:56:58.374147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:25:45.216 [2024-11-19 11:56:58.374168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 11:56:58.376227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.216 [2024-11-19 11:56:58.376351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:45.216 [2024-11-19 11:56:58.376403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:25:45.216 [2024-11-19 11:56:58.376437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 
11:56:58.376822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.216 [2024-11-19 11:56:58.376911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:45.216 [2024-11-19 11:56:58.376938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:25:45.216 [2024-11-19 11:56:58.376957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 11:56:58.391691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.216 [2024-11-19 11:56:58.391831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:45.216 [2024-11-19 11:56:58.391887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.641 ms 00:25:45.216 [2024-11-19 11:56:58.391909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 11:56:58.399301] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:45.216 [2024-11-19 11:56:58.401590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.216 [2024-11-19 11:56:58.401684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:45.216 [2024-11-19 11:56:58.401728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.640 ms 00:25:45.216 [2024-11-19 11:56:58.401757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 11:56:58.401822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.216 [2024-11-19 11:56:58.401849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:45.216 [2024-11-19 11:56:58.401869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:45.216 [2024-11-19 11:56:58.401887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 11:56:58.402499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.216 [2024-11-19 11:56:58.402589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:45.216 [2024-11-19 11:56:58.402602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:25:45.216 [2024-11-19 11:56:58.402618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 11:56:58.402653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.216 [2024-11-19 11:56:58.402661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:45.216 [2024-11-19 11:56:58.402670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:45.216 [2024-11-19 11:56:58.402677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 11:56:58.402707] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:45.216 [2024-11-19 11:56:58.402716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.216 [2024-11-19 11:56:58.402723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:45.216 [2024-11-19 11:56:58.402731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:45.216 [2024-11-19 11:56:58.402738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 11:56:58.406213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.216 [2024-11-19 11:56:58.406246] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:45.216 [2024-11-19 11:56:58.406256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.454 ms 00:25:45.216 [2024-11-19 11:56:58.406263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 11:56:58.406333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.216 [2024-11-19 11:56:58.406343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:45.216 [2024-11-19 11:56:58.406351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:45.216 [2024-11-19 11:56:58.406358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.216 [2024-11-19 11:56:58.407234] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.688 ms, result 0 00:25:46.590  [2024-11-19T11:57:00.938Z .. 2024-11-19T11:58:16.410Z] Copying: 12/1024 -> 1024/1024 [MB] (average 13 MBps; incremental 10-26 MBps progress records condensed)
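The startup above ends with 'Set FTL dirty state': while the device is mounted the superblock stays flagged dirty, and only the clean shutdown further below ('Set FTL clean state') clears it, which is how an unclean shutdown gets detected on the next load. The copy just condensed is the test reading 262144 blocks back out of ftl0 while skipping the first 262144 (the spdk_dd invocation at 11:56:57 above); 262144 blocks of 4096 bytes is exactly 1 GiB, matching the 1024 MB progress total, so a 4 KiB logical block size is implied. A minimal sketch of the read-back check, assuming hypothetical surrounding steps (only the md5sum -c call and the spdk_dd line appear verbatim in this log):

    #!/usr/bin/env bash
    # Sketch only: mirrors the verification pattern visible in this log.
    SPDK=/home/vagrant/spdk_repo/spdk          # repo root used by the harness
    FTL_JSON=$SPDK/test/ftl/config/ftl.json    # bdev config, as in the log

    # (1) Re-check the half written and checksummed before the dirty
    #     shutdown -- this is the 'md5sum -c ... testfile: OK' step above.
    md5sum -c "$SPDK/test/ftl/testfile.md5"

    # (2) Read the second 1 GiB region out of the FTL bdev into a file:
    #     262144 blocks * 4096 B = 1 GiB (the 1024 MB copied above).
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile2" \
        --count=262144 --skip=262144 --json="$FTL_JSON"

    # (3) A later checksum comparison of testfile2 (not shown in this
    #     excerpt) would confirm the recovered data matches what was written.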
[2024-11-19 11:58:16.317659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.998 [2024-11-19 11:58:16.317738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:02.998 [2024-11-19 11:58:16.317765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:02.998 [2024-11-19 11:58:16.317782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.998 [2024-11-19 11:58:16.317817] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:02.998 [2024-11-19 11:58:16.318615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.998 [2024-11-19 11:58:16.318642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:02.998 [2024-11-19 11:58:16.318655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:27:02.998 [2024-11-19 11:58:16.318666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.998 [2024-11-19 11:58:16.318985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.998 [2024-11-19 11:58:16.319006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:02.998 [2024-11-19 11:58:16.319018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:27:02.998 [2024-11-19 11:58:16.319028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:02.998 [2024-11-19 11:58:16.324461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.998 [2024-11-19 11:58:16.324498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:02.998 [2024-11-19 11:58:16.324508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.411 ms 00:27:02.998 [2024-11-19 11:58:16.324516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.998 [2024-11-19 11:58:16.331209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.998 [2024-11-19 11:58:16.331244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:02.998 [2024-11-19 11:58:16.331254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.671 ms 00:27:02.998 [2024-11-19 11:58:16.331270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.998 [2024-11-19 11:58:16.333545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.998 [2024-11-19 11:58:16.333578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:02.998 [2024-11-19 11:58:16.333588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.212 ms 00:27:02.998 [2024-11-19 11:58:16.333597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.998 [2024-11-19 11:58:16.337535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.998 [2024-11-19 11:58:16.337577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:02.998 [2024-11-19 11:58:16.337588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.904 ms 00:27:02.998 [2024-11-19 11:58:16.337596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.998 [2024-11-19 11:58:16.342237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.998 [2024-11-19 11:58:16.342270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:02.998 [2024-11-19 11:58:16.342280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.602 ms 00:27:02.998 [2024-11-19 11:58:16.342288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.998 [2024-11-19 11:58:16.344916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.998 [2024-11-19 11:58:16.344948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:02.998 [2024-11-19 11:58:16.344956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.614 ms 00:27:02.998 [2024-11-19 11:58:16.344963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.998 [2024-11-19 11:58:16.347401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.998 [2024-11-19 11:58:16.347438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:02.998 [2024-11-19 11:58:16.347447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.410 ms 00:27:02.998 [2024-11-19 11:58:16.347453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.998 [2024-11-19 11:58:16.349325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.998 [2024-11-19 11:58:16.349356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:02.998 [2024-11-19 11:58:16.349364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.843 ms 00:27:02.998 [2024-11-19 
11:58:16.349371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.998 [2024-11-19 11:58:16.351064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:02.998 [2024-11-19 11:58:16.351092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:02.998 [2024-11-19 11:58:16.351100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:27:02.998 [2024-11-19 11:58:16.351106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:02.998 [2024-11-19 11:58:16.351131] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:02.999 [2024-11-19 11:58:16.351153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:02.999 [2024-11-19 11:58:16.351162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:02.999 [2024-11-19 11:58:16.351171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free
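This final dump shows all three band states side by side: Band 1 is closed (fully written, 261120 of 261120 blocks valid, wr_cnt 1), Band 2 is open (partially written, 1536 blocks valid), and the rest are free (wr_cnt 0, never written). The per-band counts also reconcile exactly with the device-wide figure in the statistics that follow:

    261120 + 1536 = 262656 = total valid LBAs

Note too that this second mount only read data back, so user writes is 0 and the 960 total writes are pure FTL metadata, which is why WAF prints as inf below.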
0 / 261120 wr_cnt: 0 state: free 00:27:03.000 [2024-11-19 11:58:16.351872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:03.000 [2024-11-19 11:58:16.351879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:03.000 [2024-11-19 11:58:16.351887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:03.000 [2024-11-19 11:58:16.351894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:03.000 [2024-11-19 11:58:16.351901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:03.000 [2024-11-19 11:58:16.351916] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:03.000 [2024-11-19 11:58:16.351924] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2150e01d-a4f7-4434-84b1-1538dcc0d293 00:27:03.000 [2024-11-19 11:58:16.351932] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:03.000 [2024-11-19 11:58:16.351938] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:03.000 [2024-11-19 11:58:16.351945] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:03.000 [2024-11-19 11:58:16.351953] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:03.000 [2024-11-19 11:58:16.351959] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:03.000 [2024-11-19 11:58:16.351971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:03.000 [2024-11-19 11:58:16.351978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:03.000 [2024-11-19 11:58:16.351985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:03.000 [2024-11-19 11:58:16.351991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:03.000 [2024-11-19 11:58:16.351997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.000 [2024-11-19 11:58:16.352005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:03.000 [2024-11-19 11:58:16.352019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.867 ms 00:27:03.000 [2024-11-19 11:58:16.352026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.353461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.000 [2024-11-19 11:58:16.353481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:03.000 [2024-11-19 11:58:16.353490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:27:03.000 [2024-11-19 11:58:16.353497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.353588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:03.000 [2024-11-19 11:58:16.353597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:03.000 [2024-11-19 11:58:16.353605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:03.000 [2024-11-19 11:58:16.353612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.357911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.000 [2024-11-19 11:58:16.357940] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:03.000 [2024-11-19 11:58:16.357949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.000 [2024-11-19 11:58:16.357956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.358007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.000 [2024-11-19 11:58:16.358015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:03.000 [2024-11-19 11:58:16.358022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.000 [2024-11-19 11:58:16.358029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.358061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.000 [2024-11-19 11:58:16.358074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:03.000 [2024-11-19 11:58:16.358082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.000 [2024-11-19 11:58:16.358089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.358103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.000 [2024-11-19 11:58:16.358113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:03.000 [2024-11-19 11:58:16.358120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.000 [2024-11-19 11:58:16.358127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.366563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.000 [2024-11-19 11:58:16.366601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:03.000 [2024-11-19 11:58:16.366621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.000 [2024-11-19 11:58:16.366629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.374068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.000 [2024-11-19 11:58:16.374124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:03.000 [2024-11-19 11:58:16.374136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.000 [2024-11-19 11:58:16.374144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.374197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.000 [2024-11-19 11:58:16.374211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:03.000 [2024-11-19 11:58:16.374223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.000 [2024-11-19 11:58:16.374243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.374267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.000 [2024-11-19 11:58:16.374276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:03.000 [2024-11-19 11:58:16.374287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.000 [2024-11-19 11:58:16.374297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.374361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
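[Editor's note] The Rollback entries above, and the remaining ones just below, trace FTL's management pipeline unwinding: startup ran an ordered list of steps, and shutdown undoes the completed ones in reverse order, logging each step's name, duration, and status. A minimal Python sketch of that pattern follows; the real code is C (mngt/ftl_mngt.c), so the names and structure here are illustrative only, not SPDK's actual implementation.

    import time

    class StepPipeline:
        """Run named steps in order; undo completed ones in reverse.

        Illustrative sketch of the trace_step pattern seen in this log,
        not SPDK's actual mngt/ftl_mngt.c implementation.
        """

        def __init__(self):
            self._completed = []  # (name, rollback_fn) for steps that ran

        def _trace(self, kind, name, start, status):
            dur_ms = (time.monotonic() - start) * 1000.0
            print(f"{kind} name: {name} duration: {dur_ms:.3f} ms status: {status}")

        def run(self, steps):
            """steps: iterable of (name, action_fn, rollback_fn) tuples."""
            for name, action, rollback in steps:
                start = time.monotonic()
                status = action()
                self._trace("Action", name, start, status)
                if status != 0:  # a failed step unwinds whatever succeeded
                    self.shutdown()
                    return status
                self._completed.append((name, rollback))
            return 0

        def shutdown(self):
            # Reverse initialization order: the step that started first
            # ("Open base bdev" in this log) is rolled back last.
            for name, rollback in reversed(self._completed):
                start = time.monotonic()
                status = rollback()
                self._trace("Rollback", name, start, status)
            self._completed.clear()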
00:27:03.000 [2024-11-19 11:58:16.374370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:03.000 [2024-11-19 11:58:16.374379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.000 [2024-11-19 11:58:16.374386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.374435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.000 [2024-11-19 11:58:16.374449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:03.000 [2024-11-19 11:58:16.374457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.000 [2024-11-19 11:58:16.374470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.374502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.000 [2024-11-19 11:58:16.374511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:03.000 [2024-11-19 11:58:16.374518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.000 [2024-11-19 11:58:16.374525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.000 [2024-11-19 11:58:16.374564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:03.000 [2024-11-19 11:58:16.374573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:03.001 [2024-11-19 11:58:16.374584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:03.001 [2024-11-19 11:58:16.374594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:03.001 [2024-11-19 11:58:16.374744] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.047 ms, result 0 00:27:03.259 00:27:03.259 00:27:03.259 11:58:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:05.789 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:05.789 11:58:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:05.789 11:58:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:05.789 11:58:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:05.789 11:58:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:05.789 11:58:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:05.789 11:58:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:05.789 11:58:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:05.789 Process with pid 89368 is not found 00:27:05.789 11:58:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89368 00:27:05.789 11:58:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89368 ']' 00:27:05.789 11:58:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89368 00:27:05.789 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89368) - No such process 00:27:05.789 11:58:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89368 is not 
found' 00:27:05.789 11:58:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:06.047 11:58:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:06.047 11:58:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:06.047 Remove shared memory files 00:27:06.047 11:58:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:06.047 11:58:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:06.047 11:58:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:06.047 11:58:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:06.047 11:58:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:06.047 ************************************ 00:27:06.047 END TEST ftl_dirty_shutdown 00:27:06.047 ************************************ 00:27:06.047 00:27:06.047 real 3m44.906s 00:27:06.047 user 4m0.839s 00:27:06.047 sys 0m22.863s 00:27:06.047 11:58:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:06.047 11:58:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:06.047 11:58:19 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:06.047 11:58:19 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:06.047 11:58:19 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:06.047 11:58:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:06.047 ************************************ 00:27:06.047 START TEST ftl_upgrade_shutdown 00:27:06.047 ************************************ 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:06.047 * Looking for test storage... 
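[Editor's note] On the killprocess call above: the helper probes the PID with `kill -0` first, so a process that has already exited (as with pid 89368 here) yields "No such process" and cleanup continues instead of aborting. A rough Python equivalent of that guard, assuming nothing beyond POSIX signal semantics; this is not a port of autotest_common.sh:

    import os
    import signal

    def killprocess(pid: int) -> None:
        """Terminate pid, tolerating a process that is already gone."""
        try:
            os.kill(pid, 0)  # signal 0 checks existence, delivers nothing
        except ProcessLookupError:
            print(f"Process with pid {pid} is not found")
            return
        os.kill(pid, signal.SIGTERM)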
00:27:06.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:06.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.047 --rc genhtml_branch_coverage=1 00:27:06.047 --rc genhtml_function_coverage=1 00:27:06.047 --rc genhtml_legend=1 00:27:06.047 --rc geninfo_all_blocks=1 00:27:06.047 --rc geninfo_unexecuted_blocks=1 00:27:06.047 00:27:06.047 ' 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:06.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.047 --rc genhtml_branch_coverage=1 00:27:06.047 --rc genhtml_function_coverage=1 00:27:06.047 --rc genhtml_legend=1 00:27:06.047 --rc geninfo_all_blocks=1 00:27:06.047 --rc geninfo_unexecuted_blocks=1 00:27:06.047 00:27:06.047 ' 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:06.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.047 --rc genhtml_branch_coverage=1 00:27:06.047 --rc genhtml_function_coverage=1 00:27:06.047 --rc genhtml_legend=1 00:27:06.047 --rc geninfo_all_blocks=1 00:27:06.047 --rc geninfo_unexecuted_blocks=1 00:27:06.047 00:27:06.047 ' 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:06.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.047 --rc genhtml_branch_coverage=1 00:27:06.047 --rc genhtml_function_coverage=1 00:27:06.047 --rc genhtml_legend=1 00:27:06.047 --rc geninfo_all_blocks=1 00:27:06.047 --rc geninfo_unexecuted_blocks=1 00:27:06.047 00:27:06.047 ' 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:06.047 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:06.306 11:58:19 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91820 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91820 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91820 ']' 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.306 11:58:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:06.306 [2024-11-19 11:58:19.541236] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
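[Editor's note] The lcov version check above (`lt 1.15 2` via cmp_versions in scripts/common.sh) splits each version string on '.', '-', and ':' and compares the fields numerically, padding the shorter list with zeros. A rough Python rendering of that logic, not a verbatim port of the shell helper:

    import re

    def cmp_versions(v1: str, op: str, v2: str) -> bool:
        """Field-wise numeric version compare, in the spirit of scripts/common.sh."""
        def fields(v: str) -> list[int]:
            return [int(x) for x in re.split(r"[.\-:]", v) if x.isdigit()]

        a, b = fields(v1), fields(v2)
        width = max(len(a), len(b))
        a += [0] * (width - len(a))  # pad the shorter version with zeros
        b += [0] * (width - len(b))
        # Python list comparison is element-wise, matching the shell loop.
        return {"<": a < b, ">": a > b, "==": a == b}[op]

    assert cmp_versions("1.15", "<", "2")            # lt 1.15 2 -> true, as in the log
    assert not cmp_versions("24.09.1", "<", "22.11.4")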
00:27:06.306 [2024-11-19 11:58:19.541523] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91820 ] 00:27:06.306 [2024-11-19 11:58:19.676963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.306 [2024-11-19 11:58:19.710864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:07.243 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:07.501 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:07.502 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:07.502 11:58:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:07.502 11:58:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:27:07.502 11:58:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:07.502 11:58:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:07.502 11:58:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:27:07.502 11:58:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:07.761 11:58:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:07.761 { 00:27:07.761 "name": "basen1", 00:27:07.761 "aliases": [ 00:27:07.761 "88cebc48-b062-4c74-8427-74023cfae9f4" 00:27:07.761 ], 00:27:07.761 "product_name": "NVMe disk", 00:27:07.761 "block_size": 4096, 00:27:07.761 "num_blocks": 1310720, 00:27:07.761 "uuid": "88cebc48-b062-4c74-8427-74023cfae9f4", 00:27:07.761 "numa_id": -1, 00:27:07.761 "assigned_rate_limits": { 00:27:07.761 "rw_ios_per_sec": 0, 00:27:07.761 "rw_mbytes_per_sec": 0, 00:27:07.761 "r_mbytes_per_sec": 0, 00:27:07.761 "w_mbytes_per_sec": 0 00:27:07.761 }, 00:27:07.761 "claimed": true, 00:27:07.761 "claim_type": "read_many_write_one", 00:27:07.761 "zoned": false, 00:27:07.761 "supported_io_types": { 00:27:07.761 "read": true, 00:27:07.761 "write": true, 00:27:07.761 "unmap": true, 00:27:07.761 "flush": true, 00:27:07.761 "reset": true, 00:27:07.761 "nvme_admin": true, 00:27:07.761 "nvme_io": true, 00:27:07.761 "nvme_io_md": false, 00:27:07.761 "write_zeroes": true, 00:27:07.761 "zcopy": false, 00:27:07.761 "get_zone_info": false, 00:27:07.761 "zone_management": false, 00:27:07.761 "zone_append": false, 00:27:07.761 "compare": true, 00:27:07.761 "compare_and_write": false, 00:27:07.761 "abort": true, 00:27:07.761 "seek_hole": false, 00:27:07.761 "seek_data": false, 00:27:07.761 "copy": true, 00:27:07.761 "nvme_iov_md": false 00:27:07.761 }, 00:27:07.761 "driver_specific": { 00:27:07.761 "nvme": [ 00:27:07.761 { 00:27:07.761 "pci_address": "0000:00:11.0", 00:27:07.761 "trid": { 00:27:07.761 "trtype": "PCIe", 00:27:07.761 "traddr": "0000:00:11.0" 00:27:07.761 }, 00:27:07.761 "ctrlr_data": { 00:27:07.761 "cntlid": 0, 00:27:07.761 "vendor_id": "0x1b36", 00:27:07.761 "model_number": "QEMU NVMe Ctrl", 00:27:07.761 "serial_number": "12341", 00:27:07.761 "firmware_revision": "8.0.0", 00:27:07.761 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:07.761 "oacs": { 00:27:07.761 "security": 0, 00:27:07.761 "format": 1, 00:27:07.761 "firmware": 0, 00:27:07.761 "ns_manage": 1 00:27:07.761 }, 00:27:07.761 "multi_ctrlr": false, 00:27:07.761 "ana_reporting": false 00:27:07.761 }, 00:27:07.761 "vs": { 00:27:07.761 "nvme_version": "1.4" 00:27:07.761 }, 00:27:07.761 "ns_data": { 00:27:07.761 "id": 1, 00:27:07.761 "can_share": false 00:27:07.761 } 00:27:07.761 } 00:27:07.761 ], 00:27:07.761 "mp_policy": "active_passive" 00:27:07.761 } 00:27:07.761 } 00:27:07.761 ]' 00:27:07.761 11:58:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:07.761 11:58:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:07.761 11:58:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:07.761 11:58:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:07.761 11:58:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:07.761 11:58:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:07.761 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:07.761 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:07.761 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:07.761 11:58:21 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:07.761 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:08.020 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=84f28f1b-67c7-4c6f-a896-449088253f0f 00:27:08.020 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:08.020 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 84f28f1b-67c7-4c6f-a896-449088253f0f 00:27:08.279 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:08.279 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=ab89f42e-bf72-47a9-9b79-a5812d1f990e 00:27:08.279 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u ab89f42e-bf72-47a9-9b79-a5812d1f990e 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=128142ef-99ee-4283-a752-c5b48cebb1ec 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 128142ef-99ee-4283-a752-c5b48cebb1ec ]] 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 128142ef-99ee-4283-a752-c5b48cebb1ec 5120 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=128142ef-99ee-4283-a752-c5b48cebb1ec 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 128142ef-99ee-4283-a752-c5b48cebb1ec 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=128142ef-99ee-4283-a752-c5b48cebb1ec 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:08.538 11:58:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 128142ef-99ee-4283-a752-c5b48cebb1ec 00:27:08.796 11:58:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:08.796 { 00:27:08.796 "name": "128142ef-99ee-4283-a752-c5b48cebb1ec", 00:27:08.796 "aliases": [ 00:27:08.796 "lvs/basen1p0" 00:27:08.796 ], 00:27:08.796 "product_name": "Logical Volume", 00:27:08.796 "block_size": 4096, 00:27:08.796 "num_blocks": 5242880, 00:27:08.796 "uuid": "128142ef-99ee-4283-a752-c5b48cebb1ec", 00:27:08.796 "assigned_rate_limits": { 00:27:08.796 "rw_ios_per_sec": 0, 00:27:08.796 "rw_mbytes_per_sec": 0, 00:27:08.796 "r_mbytes_per_sec": 0, 00:27:08.796 "w_mbytes_per_sec": 0 00:27:08.796 }, 00:27:08.796 "claimed": false, 00:27:08.796 "zoned": false, 00:27:08.796 "supported_io_types": { 00:27:08.796 "read": true, 00:27:08.796 "write": true, 00:27:08.796 "unmap": true, 00:27:08.796 "flush": false, 00:27:08.796 "reset": true, 00:27:08.796 "nvme_admin": false, 00:27:08.796 "nvme_io": false, 00:27:08.796 "nvme_io_md": false, 00:27:08.796 "write_zeroes": 
true, 00:27:08.796 "zcopy": false, 00:27:08.796 "get_zone_info": false, 00:27:08.796 "zone_management": false, 00:27:08.796 "zone_append": false, 00:27:08.796 "compare": false, 00:27:08.797 "compare_and_write": false, 00:27:08.797 "abort": false, 00:27:08.797 "seek_hole": true, 00:27:08.797 "seek_data": true, 00:27:08.797 "copy": false, 00:27:08.797 "nvme_iov_md": false 00:27:08.797 }, 00:27:08.797 "driver_specific": { 00:27:08.797 "lvol": { 00:27:08.797 "lvol_store_uuid": "ab89f42e-bf72-47a9-9b79-a5812d1f990e", 00:27:08.797 "base_bdev": "basen1", 00:27:08.797 "thin_provision": true, 00:27:08.797 "num_allocated_clusters": 0, 00:27:08.797 "snapshot": false, 00:27:08.797 "clone": false, 00:27:08.797 "esnap_clone": false 00:27:08.797 } 00:27:08.797 } 00:27:08.797 } 00:27:08.797 ]' 00:27:08.797 11:58:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:08.797 11:58:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:08.797 11:58:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:08.797 11:58:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:27:08.797 11:58:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:27:08.797 11:58:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:27:08.797 11:58:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:08.797 11:58:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:08.797 11:58:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:09.055 11:58:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:09.055 11:58:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:09.055 11:58:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:09.372 11:58:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:09.372 11:58:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:09.372 11:58:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 128142ef-99ee-4283-a752-c5b48cebb1ec -c cachen1p0 --l2p_dram_limit 2 00:27:09.646 [2024-11-19 11:58:22.766996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.646 [2024-11-19 11:58:22.767043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:09.646 [2024-11-19 11:58:22.767060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:09.646 [2024-11-19 11:58:22.767070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.647 [2024-11-19 11:58:22.767128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.647 [2024-11-19 11:58:22.767140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:09.647 [2024-11-19 11:58:22.767148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:09.647 [2024-11-19 11:58:22.767161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.647 [2024-11-19 11:58:22.767183] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:09.647 [2024-11-19 
11:58:22.767469] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:09.647 [2024-11-19 11:58:22.767488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.647 [2024-11-19 11:58:22.767498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:09.647 [2024-11-19 11:58:22.767509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.310 ms 00:27:09.647 [2024-11-19 11:58:22.767518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.647 [2024-11-19 11:58:22.767549] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 9d1d4bba-6ce9-44a6-b3fc-6c72ff1f02c6 00:27:09.647 [2024-11-19 11:58:22.768642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.647 [2024-11-19 11:58:22.768672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:09.647 [2024-11-19 11:58:22.768688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:09.647 [2024-11-19 11:58:22.768695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.647 [2024-11-19 11:58:22.773956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.647 [2024-11-19 11:58:22.773982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:09.647 [2024-11-19 11:58:22.773993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.207 ms 00:27:09.647 [2024-11-19 11:58:22.774000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.647 [2024-11-19 11:58:22.774083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.647 [2024-11-19 11:58:22.774093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:09.647 [2024-11-19 11:58:22.774103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:09.647 [2024-11-19 11:58:22.774113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.647 [2024-11-19 11:58:22.774162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.647 [2024-11-19 11:58:22.774172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:09.647 [2024-11-19 11:58:22.774181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:09.647 [2024-11-19 11:58:22.774189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.647 [2024-11-19 11:58:22.774211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:09.647 [2024-11-19 11:58:22.775720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.647 [2024-11-19 11:58:22.775748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:09.647 [2024-11-19 11:58:22.775759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.515 ms 00:27:09.647 [2024-11-19 11:58:22.775768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.647 [2024-11-19 11:58:22.775792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.647 [2024-11-19 11:58:22.775802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:09.647 [2024-11-19 11:58:22.775810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:09.647 [2024-11-19 11:58:22.775820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:09.647 [2024-11-19 11:58:22.775836] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:09.647 [2024-11-19 11:58:22.775978] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:09.647 [2024-11-19 11:58:22.775993] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:09.647 [2024-11-19 11:58:22.776006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:09.647 [2024-11-19 11:58:22.776020] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:09.647 [2024-11-19 11:58:22.776030] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:09.647 [2024-11-19 11:58:22.776037] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:09.647 [2024-11-19 11:58:22.776051] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:09.647 [2024-11-19 11:58:22.776057] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:09.647 [2024-11-19 11:58:22.776066] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:09.647 [2024-11-19 11:58:22.776075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.647 [2024-11-19 11:58:22.776083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:09.647 [2024-11-19 11:58:22.776090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.239 ms 00:27:09.647 [2024-11-19 11:58:22.776099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.647 [2024-11-19 11:58:22.776181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.647 [2024-11-19 11:58:22.776192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:09.647 [2024-11-19 11:58:22.776200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:27:09.647 [2024-11-19 11:58:22.776208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.647 [2024-11-19 11:58:22.776306] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:09.647 [2024-11-19 11:58:22.776319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:09.647 [2024-11-19 11:58:22.776328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:09.647 [2024-11-19 11:58:22.776337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.647 [2024-11-19 11:58:22.776344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:09.647 [2024-11-19 11:58:22.776352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:09.647 [2024-11-19 11:58:22.776359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:09.647 [2024-11-19 11:58:22.776367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:09.647 [2024-11-19 11:58:22.776375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:09.647 [2024-11-19 11:58:22.776384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.647 [2024-11-19 11:58:22.776391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:09.647 [2024-11-19 11:58:22.776402] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:09.647 [2024-11-19 11:58:22.776421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.647 [2024-11-19 11:58:22.776432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:09.647 [2024-11-19 11:58:22.776440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:09.647 [2024-11-19 11:58:22.776448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.647 [2024-11-19 11:58:22.776456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:09.647 [2024-11-19 11:58:22.776465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:09.647 [2024-11-19 11:58:22.776472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.647 [2024-11-19 11:58:22.776481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:09.647 [2024-11-19 11:58:22.776489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:09.647 [2024-11-19 11:58:22.776497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:09.647 [2024-11-19 11:58:22.776505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:09.647 [2024-11-19 11:58:22.776513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:09.647 [2024-11-19 11:58:22.776520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:09.647 [2024-11-19 11:58:22.776529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:09.647 [2024-11-19 11:58:22.776537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:09.647 [2024-11-19 11:58:22.776545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:09.647 [2024-11-19 11:58:22.776553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:09.647 [2024-11-19 11:58:22.776563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:09.647 [2024-11-19 11:58:22.776570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:09.647 [2024-11-19 11:58:22.776579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:09.647 [2024-11-19 11:58:22.776587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:09.647 [2024-11-19 11:58:22.776596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.648 [2024-11-19 11:58:22.776604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:09.648 [2024-11-19 11:58:22.776612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:09.648 [2024-11-19 11:58:22.776619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.648 [2024-11-19 11:58:22.776629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:09.648 [2024-11-19 11:58:22.776637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:09.648 [2024-11-19 11:58:22.776646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.648 [2024-11-19 11:58:22.776653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:09.648 [2024-11-19 11:58:22.776662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:09.648 [2024-11-19 11:58:22.776669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.648 [2024-11-19 11:58:22.776678] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:09.648 [2024-11-19 11:58:22.776687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:09.648 [2024-11-19 11:58:22.776702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:09.648 [2024-11-19 11:58:22.776710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:09.648 [2024-11-19 11:58:22.776721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:09.648 [2024-11-19 11:58:22.776733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:09.648 [2024-11-19 11:58:22.776743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:09.648 [2024-11-19 11:58:22.776750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:09.648 [2024-11-19 11:58:22.776758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:09.648 [2024-11-19 11:58:22.776764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:09.648 [2024-11-19 11:58:22.776775] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:09.648 [2024-11-19 11:58:22.776785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:09.648 [2024-11-19 11:58:22.776795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:09.648 [2024-11-19 11:58:22.776802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:09.648 [2024-11-19 11:58:22.776811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:09.648 [2024-11-19 11:58:22.776817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:09.648 [2024-11-19 11:58:22.776828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:09.648 [2024-11-19 11:58:22.776834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:09.648 [2024-11-19 11:58:22.776844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:09.648 [2024-11-19 11:58:22.776851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:09.648 [2024-11-19 11:58:22.776859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:09.648 [2024-11-19 11:58:22.776867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:09.648 [2024-11-19 11:58:22.776875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:09.648 [2024-11-19 11:58:22.776882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:09.648 [2024-11-19 11:58:22.776890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:09.648 [2024-11-19 11:58:22.776897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:09.648 [2024-11-19 11:58:22.776905] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:09.648 [2024-11-19 11:58:22.776915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:09.648 [2024-11-19 11:58:22.776924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:09.648 [2024-11-19 11:58:22.776931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:09.648 [2024-11-19 11:58:22.776940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:09.648 [2024-11-19 11:58:22.776947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:09.648 [2024-11-19 11:58:22.776956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.648 [2024-11-19 11:58:22.776964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:09.648 [2024-11-19 11:58:22.776975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.715 ms 00:27:09.648 [2024-11-19 11:58:22.776983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.648 [2024-11-19 11:58:22.777027] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
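[Editor's note] Sanity-checking the layout dump above: with 3774873 L2P entries at 4 bytes each, the mapping table itself works out to about 14.40 MiB, in line with the 14.50 MiB l2p region; the remainder is assumed here to be block alignment and region bookkeeping, which is not confirmed from the source. A small arithmetic sketch:

    MiB = 1024 * 1024
    FTL_BLOCK_SIZE = 4096  # matches the 4096-byte block_size bdev_get_bdevs reported

    # Figures taken from the ftl_layout dump above.
    l2p_entries = 3_774_873
    l2p_addr_size = 4  # bytes per entry

    table_bytes = l2p_entries * l2p_addr_size
    table_blocks = -(-table_bytes // FTL_BLOCK_SIZE)  # ceiling division

    print(f"L2P table: {table_bytes / MiB:.2f} MiB "
          f"({table_blocks} blocks of {FTL_BLOCK_SIZE} B)")
    # -> L2P table: 14.40 MiB (3687 blocks of 4096 B); the dump reports the
    #    l2p region as 14.50 MiB, slightly larger than the raw table.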
00:27:09.648 [2024-11-19 11:58:22.777039] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:12.928 [2024-11-19 11:58:26.073847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.074035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:12.928 [2024-11-19 11:58:26.074149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3296.818 ms 00:27:12.928 [2024-11-19 11:58:26.074178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.082274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.082464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:12.928 [2024-11-19 11:58:26.082536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.014 ms 00:27:12.928 [2024-11-19 11:58:26.082566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.082665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.082695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:12.928 [2024-11-19 11:58:26.082759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:12.928 [2024-11-19 11:58:26.082781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.090614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.090770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:12.928 [2024-11-19 11:58:26.090835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.768 ms 00:27:12.928 [2024-11-19 11:58:26.090902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.090951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.091004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:12.928 [2024-11-19 11:58:26.091059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:12.928 [2024-11-19 11:58:26.091081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.091466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.091567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:12.928 [2024-11-19 11:58:26.091624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:27:12.928 [2024-11-19 11:58:26.091652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.091720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.091877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:12.928 [2024-11-19 11:58:26.091905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:12.928 [2024-11-19 11:58:26.091935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.106223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.106387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:12.928 [2024-11-19 11:58:26.106495] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.248 ms 00:27:12.928 [2024-11-19 11:58:26.106564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.115905] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:12.928 [2024-11-19 11:58:26.116823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.116856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:12.928 [2024-11-19 11:58:26.116868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.130 ms 00:27:12.928 [2024-11-19 11:58:26.116878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.127516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.127554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:12.928 [2024-11-19 11:58:26.127569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.609 ms 00:27:12.928 [2024-11-19 11:58:26.127584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.127651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.127662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:12.928 [2024-11-19 11:58:26.127685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:12.928 [2024-11-19 11:58:26.127694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.130271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.130310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:12.928 [2024-11-19 11:58:26.130321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.546 ms 00:27:12.928 [2024-11-19 11:58:26.130331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.132846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.132882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:12.928 [2024-11-19 11:58:26.132892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.491 ms 00:27:12.928 [2024-11-19 11:58:26.132902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.133186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.133202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:12.928 [2024-11-19 11:58:26.133210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:27:12.928 [2024-11-19 11:58:26.133220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.157981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.158018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:12.928 [2024-11-19 11:58:26.158028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.742 ms 00:27:12.928 [2024-11-19 11:58:26.158037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.161661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:12.928 [2024-11-19 11:58:26.161699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:12.928 [2024-11-19 11:58:26.161714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.565 ms 00:27:12.928 [2024-11-19 11:58:26.161724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.164796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.164833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:12.928 [2024-11-19 11:58:26.164843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.038 ms 00:27:12.928 [2024-11-19 11:58:26.164852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.167922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.167960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:12.928 [2024-11-19 11:58:26.167970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.036 ms 00:27:12.928 [2024-11-19 11:58:26.167981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.168022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.168033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:12.928 [2024-11-19 11:58:26.168041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:12.928 [2024-11-19 11:58:26.168050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.168109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.928 [2024-11-19 11:58:26.168120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:12.928 [2024-11-19 11:58:26.168127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:12.928 [2024-11-19 11:58:26.168136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.928 [2024-11-19 11:58:26.169036] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3401.682 ms, result 0 00:27:12.928 { 00:27:12.928 "name": "ftl", 00:27:12.928 "uuid": "9d1d4bba-6ce9-44a6-b3fc-6c72ff1f02c6" 00:27:12.928 } 00:27:12.928 11:58:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:13.186 [2024-11-19 11:58:26.376503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.186 11:58:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:13.186 11:58:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:13.444 [2024-11-19 11:58:26.768895] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:13.444 11:58:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:13.702 [2024-11-19 11:58:26.965257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:13.702 11:58:26 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:13.960 Fill FTL, iteration 1 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91937 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91937 /var/tmp/spdk.tgt.sock 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91937 ']' 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:13.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:13.960 11:58:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:14.218 [2024-11-19 11:58:27.376692] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
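The xtrace above fixes the shape of the whole test: bs=1048576 and count=1024 make every transfer exactly 1 GiB, qd=2 keeps two I/Os in flight, iterations=2, and seek/skip track the write and read offsets in bs-sized blocks. A condensed sketch of the loop these traces imply (illustrative, not the script verbatim; tcp_dd is the helper being traced here):

    seek=0; skip=0; sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$(( seek + count ))
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$(( skip + count ))
        sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')   # digests kept for later verification
    done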
00:27:14.218 [2024-11-19 11:58:27.376959] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91937 ] 00:27:14.218 [2024-11-19 11:58:27.509918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.218 [2024-11-19 11:58:27.541447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.151 11:58:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.151 11:58:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:15.151 11:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:15.151 ftln1 00:27:15.151 11:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:15.151 11:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:15.409 11:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:15.409 11:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91937 00:27:15.409 11:58:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91937 ']' 00:27:15.409 11:58:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91937 00:27:15.409 11:58:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:15.409 11:58:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:15.409 11:58:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91937 00:27:15.409 killing process with pid 91937 00:27:15.409 11:58:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:15.409 11:58:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:15.409 11:58:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91937' 00:27:15.409 11:58:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91937 00:27:15.409 11:58:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91937 00:27:15.666 11:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:15.666 11:58:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:15.666 [2024-11-19 11:58:29.018708] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
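What tcp_dd actually did above: since spdk_dd cannot query a foreign target directly, tcp_initiator_setup launched a short-lived spdk_tgt "initiator" on core 1 with its own RPC socket, attached it to the target's NVMe/TCP subsystem (which is what made bdev ftln1 appear), snapshotted just the bdev subsystem into what becomes ini.json (the redirection itself isn't visible in the xtrace), and then killed the helper. The snapshot is composed exactly as the common.sh@171-173 traces show:

    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
        save_subsystem_config -n bdev
    echo ']}'

The spdk_dd starting here then replays that JSON via --json=.../ini.json to recreate ftln1 inside its own process, so the helper never has to stay alive during the copy.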
00:27:15.666 [2024-11-19 11:58:29.018815] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91967 ] 00:27:15.940 [2024-11-19 11:58:29.151513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.940 [2024-11-19 11:58:29.182606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.317  [2024-11-19T11:58:31.663Z] Copying: 216/1024 [MB] (216 MBps) [2024-11-19T11:58:32.598Z] Copying: 439/1024 [MB] (223 MBps) [2024-11-19T11:58:33.531Z] Copying: 711/1024 [MB] (272 MBps) [2024-11-19T11:58:33.531Z] Copying: 983/1024 [MB] (272 MBps) [2024-11-19T11:58:33.790Z] Copying: 1024/1024 [MB] (average 246 MBps) 00:27:20.378 00:27:20.378 Calculate MD5 checksum, iteration 1 00:27:20.378 11:58:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:20.378 11:58:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:20.378 11:58:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:20.378 11:58:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:20.378 11:58:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:20.378 11:58:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:20.378 11:58:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:20.378 11:58:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:20.378 [2024-11-19 11:58:33.764675] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
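The bracketed "[2024-11-19T11:58:31.663Z] Copying: ..." entries are spdk_dd's periodic progress reports, stamped with wall-clock ISO times and closing with the run's average. The numbers hang together: 1024 MiB at the reported 246 MBps average is about 4.2 s of copy time (1024 / 246 ≈ 4.16), with the rest of the wall-clock window going to process and EAL startup, which each throwaway spdk_dd run pays anew.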
00:27:20.378 [2024-11-19 11:58:33.764816] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92020 ] 00:27:20.636 [2024-11-19 11:58:33.905905] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.636 [2024-11-19 11:58:33.934357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.010  [2024-11-19T11:58:35.680Z] Copying: 697/1024 [MB] (697 MBps) [2024-11-19T11:58:35.938Z] Copying: 1024/1024 [MB] (average 688 MBps) 00:27:22.526 00:27:22.526 11:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:22.526 11:58:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:24.438 11:58:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:24.438 Fill FTL, iteration 2 00:27:24.438 11:58:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5fefc836f1be26be6e55b6f51084367a 00:27:24.438 11:58:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:24.438 11:58:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:24.438 11:58:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:24.438 11:58:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:24.438 11:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:24.438 11:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:24.438 11:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:24.438 11:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:24.438 11:58:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:24.438 [2024-11-19 11:58:37.827579] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
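sums[0]=5fefc836f1be26be6e55b6f51084367a is the iteration-1 digest, captured by the md5sum | cut pair traced above; (( i++ )) then starts iteration 2. The seek=1024 on the new fill counts bs-sized blocks, dd-style, so the second GiB of random data lands immediately after the first:

    offset = seek * bs = 1024 * 1048576 B = 1 GiB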
00:27:24.438 [2024-11-19 11:58:37.827837] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92065 ] 00:27:24.695 [2024-11-19 11:58:37.968318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.695 [2024-11-19 11:58:37.999842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.069  [2024-11-19T11:58:40.416Z] Copying: 205/1024 [MB] (205 MBps) [2024-11-19T11:58:41.350Z] Copying: 404/1024 [MB] (199 MBps) [2024-11-19T11:58:42.284Z] Copying: 587/1024 [MB] (183 MBps) [2024-11-19T11:58:43.219Z] Copying: 814/1024 [MB] (227 MBps) [2024-11-19T11:58:43.219Z] Copying: 1024/1024 [MB] (average 213 MBps) 00:27:29.807 00:27:29.807 11:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:29.807 11:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:29.807 Calculate MD5 checksum, iteration 2 00:27:29.807 11:58:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:29.807 11:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:29.807 11:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:29.807 11:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:29.807 11:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:29.807 11:58:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:29.807 [2024-11-19 11:58:43.199675] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
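The iteration-2 read-back uses skip=1024, so each iteration checksums its own distinct GiB rather than re-reading the whole device. Two side observations, hedged since the log does not explain them: the read-back (~688 MBps above) runs roughly three times faster than the fills (246 and 213 MBps), plausibly because writes pay for NV-cache and P2L bookkeeping while reads are a straight L2P lookup; and after this iteration the device holds 2 x 1 GiB = 2 GiB of user data, the figure the shutdown statistics further below account for block-for-block.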
00:27:29.807 [2024-11-19 11:58:43.199928] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92123 ] 00:27:30.066 [2024-11-19 11:58:43.333901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.066 [2024-11-19 11:58:43.363178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.439  [2024-11-19T11:58:45.416Z] Copying: 688/1024 [MB] (688 MBps) [2024-11-19T11:58:45.982Z] Copying: 1024/1024 [MB] (average 681 MBps) 00:27:32.570 00:27:32.570 11:58:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:32.570 11:58:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:34.500 11:58:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:34.500 11:58:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5841198d956047f356ad983578f76136 00:27:34.500 11:58:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:34.500 11:58:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:34.500 11:58:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:34.759 [2024-11-19 11:58:48.082813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.759 [2024-11-19 11:58:48.082981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:34.759 [2024-11-19 11:58:48.082998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:34.759 [2024-11-19 11:58:48.083011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.759 [2024-11-19 11:58:48.083035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.759 [2024-11-19 11:58:48.083042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:34.759 [2024-11-19 11:58:48.083051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:34.759 [2024-11-19 11:58:48.083058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.759 [2024-11-19 11:58:48.083073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.759 [2024-11-19 11:58:48.083080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:34.759 [2024-11-19 11:58:48.083089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:34.759 [2024-11-19 11:58:48.083094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.759 [2024-11-19 11:58:48.083145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.324 ms, result 0 00:27:34.759 true 00:27:34.759 11:58:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.017 { 00:27:35.017 "name": "ftl", 00:27:35.017 "properties": [ 00:27:35.017 { 00:27:35.017 "name": "superblock_version", 00:27:35.017 "value": 5, 00:27:35.017 "read-only": true 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "name": "base_device", 00:27:35.017 "bands": [ 00:27:35.017 { 00:27:35.017 "id": 0, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 
00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 1, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 2, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 3, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 4, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 5, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 6, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 7, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 8, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 9, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 10, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 11, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 12, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 13, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 14, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 15, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 16, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "id": 17, 00:27:35.017 "state": "FREE", 00:27:35.017 "validity": 0.0 00:27:35.017 } 00:27:35.017 ], 00:27:35.017 "read-only": true 00:27:35.017 }, 00:27:35.017 { 00:27:35.017 "name": "cache_device", 00:27:35.017 "type": "bdev", 00:27:35.017 "chunks": [ 00:27:35.017 { 00:27:35.017 "id": 0, 00:27:35.018 "state": "INACTIVE", 00:27:35.018 "utilization": 0.0 00:27:35.018 }, 00:27:35.018 { 00:27:35.018 "id": 1, 00:27:35.018 "state": "CLOSED", 00:27:35.018 "utilization": 1.0 00:27:35.018 }, 00:27:35.018 { 00:27:35.018 "id": 2, 00:27:35.018 "state": "CLOSED", 00:27:35.018 "utilization": 1.0 00:27:35.018 }, 00:27:35.018 { 00:27:35.018 "id": 3, 00:27:35.018 "state": "OPEN", 00:27:35.018 "utilization": 0.001953125 00:27:35.018 }, 00:27:35.018 { 00:27:35.018 "id": 4, 00:27:35.018 "state": "OPEN", 00:27:35.018 "utilization": 0.0 00:27:35.018 } 00:27:35.018 ], 00:27:35.018 "read-only": true 00:27:35.018 }, 00:27:35.018 { 00:27:35.018 "name": "verbose_mode", 00:27:35.018 "value": true, 00:27:35.018 "unit": "", 00:27:35.018 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:35.018 }, 00:27:35.018 { 00:27:35.018 "name": "prep_upgrade_on_shutdown", 00:27:35.018 "value": false, 00:27:35.018 "unit": "", 00:27:35.018 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:35.018 } 00:27:35.018 ] 00:27:35.018 } 00:27:35.018 11:58:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:35.276 [2024-11-19 11:58:48.491156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
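The properties dump above is where those 2 GiB show up: cache chunks 1 and 2 are CLOSED at utilization 1.0 (with five chunks over a cache device the layout dumps report as 5120 MiB, each chunk holds about 1 GiB), while every base-device band is still FREE with validity 0.0 — the data sits in the write-buffer cache, not yet compacted to bands. Chunk 3's 0.001953125 is a sliver (512 blocks, if a chunk is 262144 4-KiB blocks; presumably metadata, though the dump does not say). The bdev_ftl_set_property call now being traced flips prep_upgrade_on_shutdown to true, which is the point of the test: the coming shutdown must run the upgrade-preparation path.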
00:27:35.276 [2024-11-19 11:58:48.491191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:35.276 [2024-11-19 11:58:48.491200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:35.276 [2024-11-19 11:58:48.491206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.276 [2024-11-19 11:58:48.491224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.276 [2024-11-19 11:58:48.491231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:35.276 [2024-11-19 11:58:48.491237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:35.276 [2024-11-19 11:58:48.491242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.276 [2024-11-19 11:58:48.491259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.276 [2024-11-19 11:58:48.491265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:35.276 [2024-11-19 11:58:48.491271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:35.276 [2024-11-19 11:58:48.491276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.276 [2024-11-19 11:58:48.491322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.157 ms, result 0 00:27:35.276 true 00:27:35.276 11:58:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:35.276 11:58:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:35.276 11:58:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.534 11:58:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:35.534 11:58:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:35.534 11:58:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:35.793 [2024-11-19 11:58:48.951534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.793 [2024-11-19 11:58:48.951584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:35.793 [2024-11-19 11:58:48.951594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:35.793 [2024-11-19 11:58:48.951601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.793 [2024-11-19 11:58:48.951620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.793 [2024-11-19 11:58:48.951626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:35.793 [2024-11-19 11:58:48.951632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:35.793 [2024-11-19 11:58:48.951638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.793 [2024-11-19 11:58:48.951653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.793 [2024-11-19 11:58:48.951659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:35.793 [2024-11-19 11:58:48.951665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:35.793 [2024-11-19 11:58:48.951670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:35.793 [2024-11-19 11:58:48.951714] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.171 ms, result 0 00:27:35.793 true 00:27:35.793 11:58:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.793 { 00:27:35.793 "name": "ftl", 00:27:35.793 "properties": [ 00:27:35.793 { 00:27:35.793 "name": "superblock_version", 00:27:35.793 "value": 5, 00:27:35.793 "read-only": true 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "name": "base_device", 00:27:35.793 "bands": [ 00:27:35.793 { 00:27:35.793 "id": 0, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 1, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 2, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 3, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 4, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 5, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 6, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 7, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 8, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 9, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 10, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 11, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 12, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 13, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 14, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 15, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 16, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 17, 00:27:35.793 "state": "FREE", 00:27:35.793 "validity": 0.0 00:27:35.793 } 00:27:35.793 ], 00:27:35.793 "read-only": true 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "name": "cache_device", 00:27:35.793 "type": "bdev", 00:27:35.793 "chunks": [ 00:27:35.793 { 00:27:35.793 "id": 0, 00:27:35.793 "state": "INACTIVE", 00:27:35.793 "utilization": 0.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 1, 00:27:35.793 "state": "CLOSED", 00:27:35.793 "utilization": 1.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 2, 00:27:35.793 "state": "CLOSED", 00:27:35.793 "utilization": 1.0 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 3, 00:27:35.793 "state": "OPEN", 00:27:35.793 "utilization": 0.001953125 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "id": 4, 00:27:35.793 "state": "OPEN", 00:27:35.793 "utilization": 0.0 00:27:35.793 } 00:27:35.793 ], 00:27:35.793 "read-only": true 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "name": "verbose_mode", 
00:27:35.793 "value": true, 00:27:35.793 "unit": "", 00:27:35.793 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:35.793 }, 00:27:35.793 { 00:27:35.793 "name": "prep_upgrade_on_shutdown", 00:27:35.793 "value": true, 00:27:35.793 "unit": "", 00:27:35.793 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:35.793 } 00:27:35.793 ] 00:27:35.793 } 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91820 ]] 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91820 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91820 ']' 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91820 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91820 00:27:35.793 killing process with pid 91820 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91820' 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91820 00:27:35.793 11:58:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91820 00:27:36.051 [2024-11-19 11:58:49.280430] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:36.051 [2024-11-19 11:58:49.283726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.051 [2024-11-19 11:58:49.283754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:36.051 [2024-11-19 11:58:49.283764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:36.051 [2024-11-19 11:58:49.283770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.051 [2024-11-19 11:58:49.283788] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:36.051 [2024-11-19 11:58:49.284159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.051 [2024-11-19 11:58:49.284170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:36.051 [2024-11-19 11:58:49.284177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.361 ms 00:27:36.051 [2024-11-19 11:58:49.284183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.173 [2024-11-19 11:58:56.145145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.173 [2024-11-19 11:58:56.145209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:44.173 [2024-11-19 11:58:56.145224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6860.930 ms 00:27:44.173 [2024-11-19 11:58:56.145236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.173 [2024-11-19 11:58:56.146310] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:44.173 [2024-11-19 11:58:56.146335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:44.173 [2024-11-19 11:58:56.146344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.058 ms 00:27:44.173 [2024-11-19 11:58:56.146358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.173 [2024-11-19 11:58:56.147499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.173 [2024-11-19 11:58:56.147638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:44.173 [2024-11-19 11:58:56.147653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.116 ms 00:27:44.173 [2024-11-19 11:58:56.147665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.173 [2024-11-19 11:58:56.148994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.173 [2024-11-19 11:58:56.149020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:44.173 [2024-11-19 11:58:56.149029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.288 ms 00:27:44.173 [2024-11-19 11:58:56.149037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.150610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.174 [2024-11-19 11:58:56.150640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:44.174 [2024-11-19 11:58:56.150649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.546 ms 00:27:44.174 [2024-11-19 11:58:56.150656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.150710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.174 [2024-11-19 11:58:56.150724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:44.174 [2024-11-19 11:58:56.150735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:44.174 [2024-11-19 11:58:56.150743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.151870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.174 [2024-11-19 11:58:56.151900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:44.174 [2024-11-19 11:58:56.151908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.112 ms 00:27:44.174 [2024-11-19 11:58:56.151915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.152929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.174 [2024-11-19 11:58:56.152958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:44.174 [2024-11-19 11:58:56.152967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.987 ms 00:27:44.174 [2024-11-19 11:58:56.152973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.153810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.174 [2024-11-19 11:58:56.153838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:44.174 [2024-11-19 11:58:56.153846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.810 ms 00:27:44.174 [2024-11-19 11:58:56.153853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.154773] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.174 [2024-11-19 11:58:56.154801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:44.174 [2024-11-19 11:58:56.154810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.867 ms 00:27:44.174 [2024-11-19 11:58:56.154816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.154843] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:44.174 [2024-11-19 11:58:56.154857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:44.174 [2024-11-19 11:58:56.154866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:44.174 [2024-11-19 11:58:56.154874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:44.174 [2024-11-19 11:58:56.154881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:44.174 [2024-11-19 11:58:56.154992] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:44.174 [2024-11-19 11:58:56.155005] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9d1d4bba-6ce9-44a6-b3fc-6c72ff1f02c6 00:27:44.174 [2024-11-19 11:58:56.155013] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:44.174 [2024-11-19 11:58:56.155020] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:44.174 [2024-11-19 11:58:56.155026] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:44.174 [2024-11-19 11:58:56.155034] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:44.174 [2024-11-19 11:58:56.155040] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:44.174 [2024-11-19 11:58:56.155051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:44.174 [2024-11-19 11:58:56.155058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:44.174 [2024-11-19 11:58:56.155064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:44.174 [2024-11-19 11:58:56.155070] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:44.174 [2024-11-19 11:58:56.155078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.174 [2024-11-19 11:58:56.155085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:44.174 [2024-11-19 11:58:56.155093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.235 ms 00:27:44.174 [2024-11-19 11:58:56.155100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.156467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.174 [2024-11-19 11:58:56.156487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:44.174 [2024-11-19 11:58:56.156496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.350 ms 00:27:44.174 [2024-11-19 11:58:56.156507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.156581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.174 [2024-11-19 11:58:56.156588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:44.174 [2024-11-19 11:58:56.156596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:44.174 [2024-11-19 11:58:56.156603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.161392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.174 [2024-11-19 11:58:56.161538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:44.174 [2024-11-19 11:58:56.161558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.174 [2024-11-19 11:58:56.161565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.161590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.174 [2024-11-19 11:58:56.161597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:44.174 [2024-11-19 11:58:56.161605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.174 [2024-11-19 11:58:56.161618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.161669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.174 [2024-11-19 11:58:56.161682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:44.174 [2024-11-19 11:58:56.161690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.174 [2024-11-19 11:58:56.161699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.161715] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.174 [2024-11-19 11:58:56.161722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:44.174 [2024-11-19 11:58:56.161730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.174 [2024-11-19 11:58:56.161737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.169985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.174 [2024-11-19 11:58:56.170021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:44.174 [2024-11-19 11:58:56.170032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.174 [2024-11-19 11:58:56.170044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.176800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.174 [2024-11-19 11:58:56.176836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:44.174 [2024-11-19 11:58:56.176845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.174 [2024-11-19 11:58:56.176853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.176907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.174 [2024-11-19 11:58:56.176917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:44.174 [2024-11-19 11:58:56.176924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.174 [2024-11-19 11:58:56.176932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.176965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.174 [2024-11-19 11:58:56.176974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:44.174 [2024-11-19 11:58:56.176982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.174 [2024-11-19 11:58:56.176989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.177056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.174 [2024-11-19 11:58:56.177065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:44.174 [2024-11-19 11:58:56.177073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.174 [2024-11-19 11:58:56.177080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.174 [2024-11-19 11:58:56.177106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.174 [2024-11-19 11:58:56.177117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:44.175 [2024-11-19 11:58:56.177124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.175 [2024-11-19 11:58:56.177132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.175 [2024-11-19 11:58:56.177169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.175 [2024-11-19 11:58:56.177177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:44.175 [2024-11-19 11:58:56.177185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.175 [2024-11-19 11:58:56.177193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.175 
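The shutdown statistics above close the loop on the fill arithmetic. "total valid LBAs: 524288" matches the band dump exactly — 261120 + 261120 + 2048 = 524288 blocks across bands 1-3 — and at 4 KiB per block that is precisely the 2 GiB the two fills wrote. Write amplification is just the ratio of the two counters:

    WAF = total writes / user writes = 786752 / 524288 ≈ 1.5006

i.e. the FTL physically wrote about half again as much as the host submitted, the surplus being cache-to-band traffic and metadata. The Rollback entries surrounding this are each init step being torn down in reverse order, all at 0.000 ms since teardown only releases resources.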
[2024-11-19 11:58:56.177236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:44.175 [2024-11-19 11:58:56.177245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:44.175 [2024-11-19 11:58:56.177253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:44.175 [2024-11-19 11:58:56.177260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.175 [2024-11-19 11:58:56.177365] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 6893.602 ms, result 0 00:27:50.795 11:59:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:50.795 11:59:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:50.795 11:59:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:50.795 11:59:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:50.795 11:59:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:50.795 11:59:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92295 00:27:50.795 11:59:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:50.795 11:59:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92295 00:27:50.795 11:59:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92295 ']' 00:27:50.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.795 11:59:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.795 11:59:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:50.795 11:59:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:50.796 11:59:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.796 11:59:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:50.796 11:59:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:50.796 [2024-11-19 11:59:03.853399] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
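tcp_target_setup now restarts the target, this time from the configuration saved before shutdown instead of rebuilding the stack by hand — the command traced at common.sh@85 above, running as pid 92295 on core 0:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

The "Currently unable to find bdev with name: cachen1" notices just below are expected on this path: FTL comes up from the config before its cache bdev has registered and appears to retry until cachen1/cachen1p0 are available, after which it reloads the superblock and walks the layout that the upgrade-prepared shutdown left behind.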
00:27:50.796 [2024-11-19 11:59:03.853729] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92295 ] 00:27:50.796 [2024-11-19 11:59:03.992217] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.796 [2024-11-19 11:59:04.027425] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.055 [2024-11-19 11:59:04.294424] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:51.055 [2024-11-19 11:59:04.294488] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:51.055 [2024-11-19 11:59:04.437071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.055 [2024-11-19 11:59:04.437114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:51.055 [2024-11-19 11:59:04.437128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:51.055 [2024-11-19 11:59:04.437136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.055 [2024-11-19 11:59:04.437181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.055 [2024-11-19 11:59:04.437194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:51.055 [2024-11-19 11:59:04.437202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:51.055 [2024-11-19 11:59:04.437209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.055 [2024-11-19 11:59:04.437229] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:51.055 [2024-11-19 11:59:04.437577] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:51.055 [2024-11-19 11:59:04.437627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.055 [2024-11-19 11:59:04.437647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:51.055 [2024-11-19 11:59:04.437670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.402 ms 00:27:51.055 [2024-11-19 11:59:04.437688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.055 [2024-11-19 11:59:04.438686] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:51.055 [2024-11-19 11:59:04.441004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.055 [2024-11-19 11:59:04.441122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:51.055 [2024-11-19 11:59:04.441180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.320 ms 00:27:51.055 [2024-11-19 11:59:04.441195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.055 [2024-11-19 11:59:04.441240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.055 [2024-11-19 11:59:04.441249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:51.055 [2024-11-19 11:59:04.441261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:51.055 [2024-11-19 11:59:04.441268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.055 [2024-11-19 11:59:04.445749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.055 [2024-11-19 
11:59:04.445779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:51.055 [2024-11-19 11:59:04.445789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.419 ms 00:27:51.055 [2024-11-19 11:59:04.445796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.055 [2024-11-19 11:59:04.445837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.055 [2024-11-19 11:59:04.445846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:51.055 [2024-11-19 11:59:04.445854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:51.056 [2024-11-19 11:59:04.445860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.056 [2024-11-19 11:59:04.445904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.056 [2024-11-19 11:59:04.445916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:51.056 [2024-11-19 11:59:04.445924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:51.056 [2024-11-19 11:59:04.445934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.056 [2024-11-19 11:59:04.445953] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:51.056 [2024-11-19 11:59:04.447192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.056 [2024-11-19 11:59:04.447218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:51.056 [2024-11-19 11:59:04.447226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.243 ms 00:27:51.056 [2024-11-19 11:59:04.447234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.056 [2024-11-19 11:59:04.447260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.056 [2024-11-19 11:59:04.447268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:51.056 [2024-11-19 11:59:04.447280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:51.056 [2024-11-19 11:59:04.447292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.056 [2024-11-19 11:59:04.447311] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:51.056 [2024-11-19 11:59:04.447330] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:51.056 [2024-11-19 11:59:04.447370] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:51.056 [2024-11-19 11:59:04.447387] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:51.056 [2024-11-19 11:59:04.447507] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:51.056 [2024-11-19 11:59:04.447519] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:51.056 [2024-11-19 11:59:04.447531] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:51.056 [2024-11-19 11:59:04.447555] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:51.056 [2024-11-19 11:59:04.447564] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:51.056 [2024-11-19 11:59:04.447571] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:51.056 [2024-11-19 11:59:04.447578] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:51.056 [2024-11-19 11:59:04.447585] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:51.056 [2024-11-19 11:59:04.447592] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:51.056 [2024-11-19 11:59:04.447601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.056 [2024-11-19 11:59:04.447608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:51.056 [2024-11-19 11:59:04.447615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.292 ms 00:27:51.056 [2024-11-19 11:59:04.447622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.056 [2024-11-19 11:59:04.447710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.056 [2024-11-19 11:59:04.447717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:51.056 [2024-11-19 11:59:04.447725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:27:51.056 [2024-11-19 11:59:04.447731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.056 [2024-11-19 11:59:04.447832] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:51.056 [2024-11-19 11:59:04.447878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:51.056 [2024-11-19 11:59:04.447889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:51.056 [2024-11-19 11:59:04.447898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.056 [2024-11-19 11:59:04.447909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:51.056 [2024-11-19 11:59:04.447917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:51.056 [2024-11-19 11:59:04.447925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:51.056 [2024-11-19 11:59:04.447933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:51.056 [2024-11-19 11:59:04.447940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:51.056 [2024-11-19 11:59:04.447948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.056 [2024-11-19 11:59:04.447956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:51.056 [2024-11-19 11:59:04.447964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:51.056 [2024-11-19 11:59:04.447971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.056 [2024-11-19 11:59:04.447979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:51.056 [2024-11-19 11:59:04.447987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:51.056 [2024-11-19 11:59:04.447997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.056 [2024-11-19 11:59:04.448005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:51.056 [2024-11-19 11:59:04.448016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:51.056 [2024-11-19 11:59:04.448024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.056 [2024-11-19 11:59:04.448032] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:51.056 [2024-11-19 11:59:04.448039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:51.056 [2024-11-19 11:59:04.448047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:51.056 [2024-11-19 11:59:04.448054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:51.056 [2024-11-19 11:59:04.448061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:51.056 [2024-11-19 11:59:04.448069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:51.056 [2024-11-19 11:59:04.448076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:51.056 [2024-11-19 11:59:04.448083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:51.056 [2024-11-19 11:59:04.448091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:51.056 [2024-11-19 11:59:04.448098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:51.056 [2024-11-19 11:59:04.448106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:51.056 [2024-11-19 11:59:04.448113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:51.056 [2024-11-19 11:59:04.448120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:51.056 [2024-11-19 11:59:04.448128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:51.056 [2024-11-19 11:59:04.448137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.056 [2024-11-19 11:59:04.448145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:51.056 [2024-11-19 11:59:04.448152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:51.056 [2024-11-19 11:59:04.448159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.056 [2024-11-19 11:59:04.448167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:51.056 [2024-11-19 11:59:04.448175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:51.056 [2024-11-19 11:59:04.448182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.056 [2024-11-19 11:59:04.448189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:51.056 [2024-11-19 11:59:04.448197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:51.056 [2024-11-19 11:59:04.448204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.056 [2024-11-19 11:59:04.448211] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:51.056 [2024-11-19 11:59:04.448222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:51.056 [2024-11-19 11:59:04.448230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:51.056 [2024-11-19 11:59:04.448237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.056 [2024-11-19 11:59:04.448248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:51.056 [2024-11-19 11:59:04.448256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:51.056 [2024-11-19 11:59:04.448264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:51.056 [2024-11-19 11:59:04.448271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:51.056 [2024-11-19 11:59:04.448277] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:51.056 [2024-11-19 11:59:04.448284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:51.056 [2024-11-19 11:59:04.448292] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:51.056 [2024-11-19 11:59:04.448301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.056 [2024-11-19 11:59:04.448309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:51.056 [2024-11-19 11:59:04.448316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:51.056 [2024-11-19 11:59:04.448323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:51.056 [2024-11-19 11:59:04.448330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:51.056 [2024-11-19 11:59:04.448337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:51.056 [2024-11-19 11:59:04.448344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:51.056 [2024-11-19 11:59:04.448351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:51.056 [2024-11-19 11:59:04.448357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:51.056 [2024-11-19 11:59:04.448364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:51.056 [2024-11-19 11:59:04.448371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:51.057 [2024-11-19 11:59:04.448380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:51.057 [2024-11-19 11:59:04.448387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:51.057 [2024-11-19 11:59:04.448394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:51.057 [2024-11-19 11:59:04.448402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:51.057 [2024-11-19 11:59:04.448420] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:51.057 [2024-11-19 11:59:04.448428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.057 [2024-11-19 11:59:04.448436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:51.057 [2024-11-19 11:59:04.448442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:51.057 [2024-11-19 11:59:04.448450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:51.057 [2024-11-19 11:59:04.448457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:51.057 [2024-11-19 11:59:04.448468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.057 [2024-11-19 11:59:04.448475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:51.057 [2024-11-19 11:59:04.448483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.705 ms 00:27:51.057 [2024-11-19 11:59:04.448492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.057 [2024-11-19 11:59:04.448537] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:51.057 [2024-11-19 11:59:04.448547] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:53.646 [2024-11-19 11:59:06.552958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.646 [2024-11-19 11:59:06.553025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:53.646 [2024-11-19 11:59:06.553038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2104.417 ms 00:27:53.646 [2024-11-19 11:59:06.553046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.646 [2024-11-19 11:59:06.560758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.646 [2024-11-19 11:59:06.560799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:53.646 [2024-11-19 11:59:06.560811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.629 ms 00:27:53.646 [2024-11-19 11:59:06.560819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.646 [2024-11-19 11:59:06.560868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.646 [2024-11-19 11:59:06.560877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:53.646 [2024-11-19 11:59:06.560885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:53.646 [2024-11-19 11:59:06.560898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.578757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.578802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:53.647 [2024-11-19 11:59:06.578814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.817 ms 00:27:53.647 [2024-11-19 11:59:06.578822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.578863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.578871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:53.647 [2024-11-19 11:59:06.578879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:53.647 [2024-11-19 11:59:06.578886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.579218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.579234] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:53.647 [2024-11-19 11:59:06.579248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:27:53.647 [2024-11-19 11:59:06.579255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.579292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.579300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:53.647 [2024-11-19 11:59:06.579311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:53.647 [2024-11-19 11:59:06.579319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.584595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.584638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:53.647 [2024-11-19 11:59:06.584651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.255 ms 00:27:53.647 [2024-11-19 11:59:06.584666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.587111] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:53.647 [2024-11-19 11:59:06.587154] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:53.647 [2024-11-19 11:59:06.587168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.587186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:53.647 [2024-11-19 11:59:06.587197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.397 ms 00:27:53.647 [2024-11-19 11:59:06.587206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.592200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.592238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:53.647 [2024-11-19 11:59:06.592257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.946 ms 00:27:53.647 [2024-11-19 11:59:06.592267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.593727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.593895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:53.647 [2024-11-19 11:59:06.593915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.413 ms 00:27:53.647 [2024-11-19 11:59:06.593926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.595464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.595498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:53.647 [2024-11-19 11:59:06.595509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.498 ms 00:27:53.647 [2024-11-19 11:59:06.595519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.595927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.595945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:53.647 [2024-11-19 
11:59:06.595953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.304 ms 00:27:53.647 [2024-11-19 11:59:06.595961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.610075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.610114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:53.647 [2024-11-19 11:59:06.610128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.096 ms 00:27:53.647 [2024-11-19 11:59:06.610136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.617377] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:53.647 [2024-11-19 11:59:06.618111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.618139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:53.647 [2024-11-19 11:59:06.618155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.938 ms 00:27:53.647 [2024-11-19 11:59:06.618167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.618216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.618230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:53.647 [2024-11-19 11:59:06.618242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:53.647 [2024-11-19 11:59:06.618250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.618303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.618313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:53.647 [2024-11-19 11:59:06.618322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:53.647 [2024-11-19 11:59:06.618334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.618358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.618367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:53.647 [2024-11-19 11:59:06.618375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:53.647 [2024-11-19 11:59:06.618383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.618425] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:53.647 [2024-11-19 11:59:06.618436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.618445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:53.647 [2024-11-19 11:59:06.618457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:53.647 [2024-11-19 11:59:06.618465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.621090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.621213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:53.647 [2024-11-19 11:59:06.621233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.604 ms 00:27:53.647 [2024-11-19 11:59:06.621243] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:53.647 [2024-11-19 11:59:06.621305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.647 [2024-11-19 11:59:06.621315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:53.647 [2024-11-19 11:59:06.621324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:53.648 [2024-11-19 11:59:06.621332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.648 [2024-11-19 11:59:06.622251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2184.737 ms, result 0 00:27:53.648 [2024-11-19 11:59:06.634610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.648 [2024-11-19 11:59:06.650608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:53.648 [2024-11-19 11:59:06.658700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:53.648 11:59:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:53.648 11:59:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:53.648 11:59:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:53.648 11:59:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:53.648 11:59:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:53.648 [2024-11-19 11:59:06.874787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.648 [2024-11-19 11:59:06.874929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:53.648 [2024-11-19 11:59:06.874946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:53.648 [2024-11-19 11:59:06.874954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.648 [2024-11-19 11:59:06.874985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.648 [2024-11-19 11:59:06.874994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:53.648 [2024-11-19 11:59:06.875002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:53.648 [2024-11-19 11:59:06.875009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.648 [2024-11-19 11:59:06.875032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.648 [2024-11-19 11:59:06.875040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:53.648 [2024-11-19 11:59:06.875048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:53.648 [2024-11-19 11:59:06.875055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.648 [2024-11-19 11:59:06.875111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.312 ms, result 0 00:27:53.648 true 00:27:53.648 11:59:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:53.906 { 00:27:53.906 "name": "ftl", 00:27:53.906 "properties": [ 00:27:53.906 { 00:27:53.906 "name": "superblock_version", 00:27:53.906 "value": 5, 00:27:53.906 "read-only": true 00:27:53.906 }, 00:27:53.906 { 
00:27:53.906 "name": "base_device", 00:27:53.906 "bands": [ 00:27:53.906 { 00:27:53.906 "id": 0, 00:27:53.906 "state": "CLOSED", 00:27:53.906 "validity": 1.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 1, 00:27:53.906 "state": "CLOSED", 00:27:53.906 "validity": 1.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 2, 00:27:53.906 "state": "CLOSED", 00:27:53.906 "validity": 0.007843137254901933 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 3, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 4, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 5, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 6, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 7, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 8, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 9, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 10, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 11, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 12, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 13, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 14, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 15, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 16, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 17, 00:27:53.906 "state": "FREE", 00:27:53.906 "validity": 0.0 00:27:53.906 } 00:27:53.906 ], 00:27:53.906 "read-only": true 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "name": "cache_device", 00:27:53.906 "type": "bdev", 00:27:53.906 "chunks": [ 00:27:53.906 { 00:27:53.906 "id": 0, 00:27:53.906 "state": "INACTIVE", 00:27:53.906 "utilization": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 1, 00:27:53.906 "state": "OPEN", 00:27:53.906 "utilization": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 2, 00:27:53.906 "state": "OPEN", 00:27:53.906 "utilization": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 3, 00:27:53.906 "state": "FREE", 00:27:53.906 "utilization": 0.0 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "id": 4, 00:27:53.906 "state": "FREE", 00:27:53.906 "utilization": 0.0 00:27:53.906 } 00:27:53.906 ], 00:27:53.906 "read-only": true 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "name": "verbose_mode", 00:27:53.906 "value": true, 00:27:53.906 "unit": "", 00:27:53.906 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:53.906 }, 00:27:53.906 { 00:27:53.906 "name": "prep_upgrade_on_shutdown", 00:27:53.906 "value": false, 00:27:53.906 "unit": "", 00:27:53.906 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:53.906 } 00:27:53.906 ] 00:27:53.906 } 00:27:53.906 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:53.906 11:59:07 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:53.906 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:53.906 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:53.906 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:53.906 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:53.906 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:53.906 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:54.165 Validate MD5 checksum, iteration 1 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:54.165 11:59:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:54.424 [2024-11-19 11:59:07.575402] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
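Editor's note: the two jq filters traced above count NV-cache chunks with non-zero utilization and bands left in OPENED state from the bdev_ftl_get_properties JSON; both come out 0 here, so the test proceeds. A standalone sketch of those checks, with the rpc.py path and jq expressions copied from the trace (the echo messages are illustrative only):

# Sketch of the ftl_get_properties checks traced above (upgrade_shutdown.sh).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

used=$("$RPC" bdev_ftl_get_properties -b ftl |
    jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
opened=$("$RPC" bdev_ftl_get_properties -b ftl |
    jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')

[[ $used -ne 0 ]] && echo "NV cache still holds data in $used chunk(s)"
[[ $opened -ne 0 ]] && echo "$opened band(s) left OPENED"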
00:27:54.424 [2024-11-19 11:59:07.575523] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92345 ] 00:27:54.424 [2024-11-19 11:59:07.708843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.424 [2024-11-19 11:59:07.740026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.797  [2024-11-19T11:59:09.774Z] Copying: 681/1024 [MB] (681 MBps) [2024-11-19T11:59:10.340Z] Copying: 1024/1024 [MB] (average 670 MBps) 00:27:56.928 00:27:56.928 11:59:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:56.928 11:59:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:59.478 11:59:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:59.478 11:59:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5fefc836f1be26be6e55b6f51084367a 00:27:59.478 11:59:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5fefc836f1be26be6e55b6f51084367a != \5\f\e\f\c\8\3\6\f\1\b\e\2\6\b\e\6\e\5\5\b\6\f\5\1\0\8\4\3\6\7\a ]] 00:27:59.478 11:59:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:59.478 11:59:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:59.478 Validate MD5 checksum, iteration 2 00:27:59.478 11:59:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:59.478 11:59:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:59.478 11:59:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:59.478 11:59:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:59.478 11:59:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:59.478 11:59:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:59.478 11:59:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:59.478 [2024-11-19 11:59:12.384148] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
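Editor's note: iteration 1 above reads the first 1024 MiB from ftln1 over NVMe/TCP via spdk_dd, md5sums the copy, and compares it against the expected checksum; iteration 2 then advances skip by 1024. A sketch of that loop under stated assumptions — tcp_dd is the spdk_dd wrapper shown in the trace, while the iterations, testdir, and sums[] variables (the pre-shutdown checksums) are assumed names:

# Sketch of the validate-checksum loop traced above (upgrade_shutdown.sh).
skip=0
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # Read 1024 x 1 MiB blocks from the FTL namespace into a scratch file.
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    ((skip += 1024))
    sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
    # Any mismatch against the checksum recorded before shutdown fails the test.
    [[ $sum == "${sums[i]}" ]] || exit 1
done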
00:27:59.478 [2024-11-19 11:59:12.384386] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92401 ] 00:27:59.478 [2024-11-19 11:59:12.518549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.478 [2024-11-19 11:59:12.549825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.859  [2024-11-19T11:59:14.531Z] Copying: 671/1024 [MB] (671 MBps) [2024-11-19T11:59:14.794Z] Copying: 1024/1024 [MB] (average 680 MBps) 00:28:01.382 00:28:01.382 11:59:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:01.382 11:59:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5841198d956047f356ad983578f76136 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5841198d956047f356ad983578f76136 != \5\8\4\1\1\9\8\d\9\5\6\0\4\7\f\3\5\6\a\d\9\8\3\5\7\8\f\7\6\1\3\6 ]] 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92295 ]] 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92295 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92452 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92452 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92452 ']' 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
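Editor's note: after both checksums match, the trace above runs tcp_target_shutdown_dirty — it SIGKILLs the old target (pid 92295) instead of letting FTL persist a clean shutdown, then brings a fresh target up from the same tgt.json so the startup sequence logged below has to recover the dirty state. A sketch of that step, reusing the SPDK_BIN/CONFIG names assumed earlier:

# Sketch of tcp_target_shutdown_dirty as traced above (ftl/common.sh);
# kill -9 denies FTL any chance to write a clean-shutdown marker.
if [[ -n $spdk_tgt_pid ]]; then
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
fi

# Relaunch from the saved config; the restarted target's FTL instance then
# detects the dirty state and replays recovery during startup.
"$SPDK_BIN" '--cpumask=[0]' --config="$CONFIG" &
spdk_tgt_pid=$!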
00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:03.928 11:59:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:03.928 [2024-11-19 11:59:16.782933] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:03.928 [2024-11-19 11:59:16.783041] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92452 ] 00:28:03.928 [2024-11-19 11:59:16.913073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.928 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 92295 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:03.928 [2024-11-19 11:59:16.942934] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.928 [2024-11-19 11:59:17.191832] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:03.928 [2024-11-19 11:59:17.192051] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:03.928 [2024-11-19 11:59:17.329434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.928 [2024-11-19 11:59:17.329474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:03.928 [2024-11-19 11:59:17.329488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:03.928 [2024-11-19 11:59:17.329496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.928 [2024-11-19 11:59:17.329532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.928 [2024-11-19 11:59:17.329540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:03.928 [2024-11-19 11:59:17.329546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:03.928 [2024-11-19 11:59:17.329552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.928 [2024-11-19 11:59:17.329568] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:03.928 [2024-11-19 11:59:17.329736] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:03.928 [2024-11-19 11:59:17.329746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.928 [2024-11-19 11:59:17.329752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:03.928 [2024-11-19 11:59:17.329759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:28:03.928 [2024-11-19 11:59:17.329765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.928 [2024-11-19 11:59:17.329957] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:03.928 [2024-11-19 11:59:17.333128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.928 [2024-11-19 11:59:17.333161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:03.928 [2024-11-19 11:59:17.333171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.173 ms 00:28:03.928 [2024-11-19 11:59:17.333180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.928 [2024-11-19 11:59:17.333933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:03.928 [2024-11-19 11:59:17.333954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:03.928 [2024-11-19 11:59:17.333962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:03.928 [2024-11-19 11:59:17.333968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.928 [2024-11-19 11:59:17.334176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.928 [2024-11-19 11:59:17.334185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:03.928 [2024-11-19 11:59:17.334194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.165 ms 00:28:03.928 [2024-11-19 11:59:17.334199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.928 [2024-11-19 11:59:17.334227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.928 [2024-11-19 11:59:17.334234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:03.928 [2024-11-19 11:59:17.334241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:03.928 [2024-11-19 11:59:17.334247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.928 [2024-11-19 11:59:17.334266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.928 [2024-11-19 11:59:17.334273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:03.928 [2024-11-19 11:59:17.334282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:03.928 [2024-11-19 11:59:17.334287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.928 [2024-11-19 11:59:17.334307] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:03.928 [2024-11-19 11:59:17.335173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.928 [2024-11-19 11:59:17.335265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:03.928 [2024-11-19 11:59:17.335314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.872 ms 00:28:03.928 [2024-11-19 11:59:17.335331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.928 [2024-11-19 11:59:17.335363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.928 [2024-11-19 11:59:17.335385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:03.928 [2024-11-19 11:59:17.335401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:03.928 [2024-11-19 11:59:17.335532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.928 [2024-11-19 11:59:17.335641] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:03.928 [2024-11-19 11:59:17.335667] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:03.928 [2024-11-19 11:59:17.335714] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:03.928 [2024-11-19 11:59:17.335744] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:03.928 [2024-11-19 11:59:17.335843] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:03.928 [2024-11-19 11:59:17.335872] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:03.928 [2024-11-19 11:59:17.335901] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:03.928 [2024-11-19 11:59:17.335925] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:03.929 [2024-11-19 11:59:17.335953] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:03.929 [2024-11-19 11:59:17.336015] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:03.929 [2024-11-19 11:59:17.336034] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:03.929 [2024-11-19 11:59:17.336087] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:03.929 [2024-11-19 11:59:17.336104] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:03.929 [2024-11-19 11:59:17.336119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.929 [2024-11-19 11:59:17.336155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:03.929 [2024-11-19 11:59:17.336173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.480 ms 00:28:03.929 [2024-11-19 11:59:17.336187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.929 [2024-11-19 11:59:17.336270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.929 [2024-11-19 11:59:17.336378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:03.929 [2024-11-19 11:59:17.336397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:28:03.929 [2024-11-19 11:59:17.336423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.929 [2024-11-19 11:59:17.336523] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:03.929 [2024-11-19 11:59:17.336583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:03.929 [2024-11-19 11:59:17.336612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:03.929 [2024-11-19 11:59:17.336627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.929 [2024-11-19 11:59:17.336646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:03.929 [2024-11-19 11:59:17.336662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:03.929 [2024-11-19 11:59:17.336676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:03.929 [2024-11-19 11:59:17.336691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:03.929 [2024-11-19 11:59:17.336706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:03.929 [2024-11-19 11:59:17.336721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.929 [2024-11-19 11:59:17.336736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:03.929 [2024-11-19 11:59:17.336782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:03.929 [2024-11-19 11:59:17.336823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.929 [2024-11-19 11:59:17.336840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:03.929 [2024-11-19 11:59:17.336881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:03.929 [2024-11-19 11:59:17.336898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.929 [2024-11-19 11:59:17.336918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:03.929 [2024-11-19 11:59:17.336933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:03.929 [2024-11-19 11:59:17.336947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.929 [2024-11-19 11:59:17.336962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:03.929 [2024-11-19 11:59:17.336976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:03.929 [2024-11-19 11:59:17.337017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.929 [2024-11-19 11:59:17.337033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:03.929 [2024-11-19 11:59:17.337048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:03.929 [2024-11-19 11:59:17.337063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.929 [2024-11-19 11:59:17.337077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:03.929 [2024-11-19 11:59:17.337091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:03.929 [2024-11-19 11:59:17.337105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.929 [2024-11-19 11:59:17.337119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:03.929 [2024-11-19 11:59:17.337154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:03.929 [2024-11-19 11:59:17.337170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.929 [2024-11-19 11:59:17.337184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:04.191 [2024-11-19 11:59:17.337228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:04.191 [2024-11-19 11:59:17.337236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.191 [2024-11-19 11:59:17.337241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:04.191 [2024-11-19 11:59:17.337247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:04.191 [2024-11-19 11:59:17.337252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.191 [2024-11-19 11:59:17.337257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:04.191 [2024-11-19 11:59:17.337263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:04.191 [2024-11-19 11:59:17.337268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.191 [2024-11-19 11:59:17.337273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:04.191 [2024-11-19 11:59:17.337279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:04.191 [2024-11-19 11:59:17.337284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.191 [2024-11-19 11:59:17.337289] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:04.191 [2024-11-19 11:59:17.337296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:04.191 [2024-11-19 11:59:17.337304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:04.191 [2024-11-19 11:59:17.337310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:04.191 [2024-11-19 11:59:17.337315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:04.191 [2024-11-19 11:59:17.337322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:04.191 [2024-11-19 11:59:17.337327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:04.191 [2024-11-19 11:59:17.337332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:04.191 [2024-11-19 11:59:17.337337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:04.191 [2024-11-19 11:59:17.337342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:04.191 [2024-11-19 11:59:17.337349] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:04.191 [2024-11-19 11:59:17.337356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.191 [2024-11-19 11:59:17.337362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:04.191 [2024-11-19 11:59:17.337368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:04.191 [2024-11-19 11:59:17.337374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:04.191 [2024-11-19 11:59:17.337379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:04.191 [2024-11-19 11:59:17.337385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:04.191 [2024-11-19 11:59:17.337390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:04.191 [2024-11-19 11:59:17.337396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:04.191 [2024-11-19 11:59:17.337401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:04.191 [2024-11-19 11:59:17.337416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:04.191 [2024-11-19 11:59:17.337425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:04.191 [2024-11-19 11:59:17.337432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:04.191 [2024-11-19 11:59:17.337437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:04.191 [2024-11-19 11:59:17.337443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:04.191 [2024-11-19 11:59:17.337449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:04.191 [2024-11-19 11:59:17.337454] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:04.191 [2024-11-19 11:59:17.337460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.191 [2024-11-19 11:59:17.337467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:04.192 [2024-11-19 11:59:17.337473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:04.192 [2024-11-19 11:59:17.337483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:04.192 [2024-11-19 11:59:17.337489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:04.192 [2024-11-19 11:59:17.337495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.337501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:04.192 [2024-11-19 11:59:17.337506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.028 ms 00:28:04.192 [2024-11-19 11:59:17.337512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.343672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.343751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:04.192 [2024-11-19 11:59:17.343796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.116 ms 00:28:04.192 [2024-11-19 11:59:17.343813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.343851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.343867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:04.192 [2024-11-19 11:59:17.343884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:04.192 [2024-11-19 11:59:17.343899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.356729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.356853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:04.192 [2024-11-19 11:59:17.356912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.786 ms 00:28:04.192 [2024-11-19 11:59:17.356962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.357006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.357074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:04.192 [2024-11-19 11:59:17.357113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:04.192 [2024-11-19 11:59:17.357128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.357223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.357246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:04.192 [2024-11-19 11:59:17.357262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:04.192 [2024-11-19 11:59:17.357317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.357373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.357392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:04.192 [2024-11-19 11:59:17.357418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:04.192 [2024-11-19 11:59:17.357434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.362436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.362537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:04.192 [2024-11-19 11:59:17.362595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.974 ms 00:28:04.192 [2024-11-19 11:59:17.362617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.362709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.362757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:04.192 [2024-11-19 11:59:17.362799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:04.192 [2024-11-19 11:59:17.362817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.366144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.366253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:04.192 [2024-11-19 11:59:17.366304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.296 ms 00:28:04.192 [2024-11-19 11:59:17.366326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.367460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.367562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:04.192 [2024-11-19 11:59:17.367724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.280 ms 00:28:04.192 [2024-11-19 11:59:17.367752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.381164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.381281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:04.192 [2024-11-19 11:59:17.381322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.349 ms 00:28:04.192 [2024-11-19 11:59:17.381340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.381454] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:04.192 [2024-11-19 11:59:17.381575] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:04.192 [2024-11-19 11:59:17.381659] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:04.192 [2024-11-19 11:59:17.381761] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:04.192 [2024-11-19 11:59:17.381807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.381845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:04.192 [2024-11-19 
11:59:17.381863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.432 ms 00:28:04.192 [2024-11-19 11:59:17.381878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.381927] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:04.192 [2024-11-19 11:59:17.381959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.381974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:04.192 [2024-11-19 11:59:17.381990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:28:04.192 [2024-11-19 11:59:17.382005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.383955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.384053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:04.192 [2024-11-19 11:59:17.384094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.900 ms 00:28:04.192 [2024-11-19 11:59:17.384113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.384627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.384701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:04.192 [2024-11-19 11:59:17.384739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:04.192 [2024-11-19 11:59:17.384756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.192 [2024-11-19 11:59:17.384808] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:04.192 [2024-11-19 11:59:17.384945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.192 [2024-11-19 11:59:17.384967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:04.192 [2024-11-19 11:59:17.385024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.137 ms 00:28:04.192 [2024-11-19 11:59:17.385039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.454 [2024-11-19 11:59:17.852796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.454 [2024-11-19 11:59:17.852965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:04.454 [2024-11-19 11:59:17.853054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 467.516 ms 00:28:04.454 [2024-11-19 11:59:17.853081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.454 [2024-11-19 11:59:17.854509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.455 [2024-11-19 11:59:17.854617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:04.455 [2024-11-19 11:59:17.854676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.015 ms 00:28:04.455 [2024-11-19 11:59:17.854699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.455 [2024-11-19 11:59:17.855282] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:04.455 [2024-11-19 11:59:17.855439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.455 [2024-11-19 11:59:17.855453] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:04.455 [2024-11-19 11:59:17.855463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.704 ms 00:28:04.455 [2024-11-19 11:59:17.855471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.455 [2024-11-19 11:59:17.855508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.455 [2024-11-19 11:59:17.855529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:04.455 [2024-11-19 11:59:17.855537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:04.455 [2024-11-19 11:59:17.855544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.455 [2024-11-19 11:59:17.855584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 470.771 ms, result 0 00:28:04.455 [2024-11-19 11:59:17.855628] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:04.455 [2024-11-19 11:59:17.855713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.455 [2024-11-19 11:59:17.855723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:04.455 [2024-11-19 11:59:17.855731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.086 ms 00:28:04.455 [2024-11-19 11:59:17.855737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.300588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.300796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:05.021 [2024-11-19 11:59:18.300824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 444.499 ms 00:28:05.021 [2024-11-19 11:59:18.300836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.302201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.302237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:05.021 [2024-11-19 11:59:18.302248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.925 ms 00:28:05.021 [2024-11-19 11:59:18.302258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.302580] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:05.021 [2024-11-19 11:59:18.302605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.302613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:05.021 [2024-11-19 11:59:18.302622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.322 ms 00:28:05.021 [2024-11-19 11:59:18.302630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.302655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.302664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:05.021 [2024-11-19 11:59:18.302672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:05.021 [2024-11-19 11:59:18.302678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 
11:59:18.302713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 447.088 ms, result 0 00:28:05.021 [2024-11-19 11:59:18.302749] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:05.021 [2024-11-19 11:59:18.302759] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:05.021 [2024-11-19 11:59:18.302768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.302779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:05.021 [2024-11-19 11:59:18.302790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 917.972 ms 00:28:05.021 [2024-11-19 11:59:18.302801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.302843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.302856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:05.021 [2024-11-19 11:59:18.302872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:05.021 [2024-11-19 11:59:18.302881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.310599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:05.021 [2024-11-19 11:59:18.310700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.310710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:05.021 [2024-11-19 11:59:18.310719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.803 ms 00:28:05.021 [2024-11-19 11:59:18.310726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.311429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.311680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:05.021 [2024-11-19 11:59:18.311701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.632 ms 00:28:05.021 [2024-11-19 11:59:18.311713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.314050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.314077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:05.021 [2024-11-19 11:59:18.314087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.307 ms 00:28:05.021 [2024-11-19 11:59:18.314095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.314137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.314146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:05.021 [2024-11-19 11:59:18.314159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:05.021 [2024-11-19 11:59:18.314166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.314263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.314277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:05.021 
[2024-11-19 11:59:18.314285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:05.021 [2024-11-19 11:59:18.314292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.314312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.314320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:05.021 [2024-11-19 11:59:18.314327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:05.021 [2024-11-19 11:59:18.314335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.314360] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:05.021 [2024-11-19 11:59:18.314369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.021 [2024-11-19 11:59:18.314378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:05.021 [2024-11-19 11:59:18.314385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:05.021 [2024-11-19 11:59:18.314392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.021 [2024-11-19 11:59:18.314452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.022 [2024-11-19 11:59:18.314464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:05.022 [2024-11-19 11:59:18.314472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:05.022 [2024-11-19 11:59:18.314478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.022 [2024-11-19 11:59:18.315352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 985.544 ms, result 0 00:28:05.022 [2024-11-19 11:59:18.331062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.022 [2024-11-19 11:59:18.347054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:05.022 [2024-11-19 11:59:18.355143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:05.022 Validate MD5 checksum, iteration 1 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:05.022 11:59:18 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:05.022 11:59:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:05.280 [2024-11-19 11:59:18.432851] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:05.280 [2024-11-19 11:59:18.433074] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92475 ] 00:28:05.280 [2024-11-19 11:59:18.567693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.280 [2024-11-19 11:59:18.599403] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.655  [2024-11-19T11:59:20.633Z] Copying: 700/1024 [MB] (700 MBps) [2024-11-19T11:59:22.007Z] Copying: 1024/1024 [MB] (average 697 MBps) 00:28:08.595 00:28:08.595 11:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:08.595 11:59:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.493 11:59:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:10.493 11:59:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5fefc836f1be26be6e55b6f51084367a 00:28:10.493 11:59:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5fefc836f1be26be6e55b6f51084367a != \5\f\e\f\c\8\3\6\f\1\b\e\2\6\b\e\6\e\5\5\b\6\f\5\1\0\8\4\3\6\7\a ]] 00:28:10.493 11:59:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:10.493 11:59:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:10.493 11:59:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:10.493 Validate MD5 checksum, iteration 2 00:28:10.493 11:59:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:10.493 11:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:10.493 11:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:10.493 11:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:10.493 11:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:10.493 11:59:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:10.493 [2024-11-19 11:59:23.820176] Starting SPDK v24.09.1-pre git 
sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:10.493 [2024-11-19 11:59:23.820392] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92537 ] 00:28:10.750 [2024-11-19 11:59:23.952847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.750 [2024-11-19 11:59:23.984370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.124  [2024-11-19T11:59:25.793Z] Copying: 705/1024 [MB] (705 MBps) [2024-11-19T11:59:27.167Z] Copying: 1024/1024 [MB] (average 703 MBps) 00:28:13.755 00:28:13.756 11:59:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:13.756 11:59:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5841198d956047f356ad983578f76136 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5841198d956047f356ad983578f76136 != \5\8\4\1\1\9\8\d\9\5\6\0\4\7\f\3\5\6\a\d\9\8\3\5\7\8\f\7\6\1\3\6 ]] 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92452 ]] 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92452 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92452 ']' 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92452 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92452 00:28:15.666 killing process with pid 92452 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92452' 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 92452 00:28:15.666 11:59:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92452 00:28:15.666 [2024-11-19 11:59:29.006973] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:15.666 [2024-11-19 11:59:29.010701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.666 [2024-11-19 11:59:29.010735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:15.666 [2024-11-19 11:59:29.010745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:15.666 [2024-11-19 11:59:29.010752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.666 [2024-11-19 11:59:29.010768] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:15.666 [2024-11-19 11:59:29.011129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.666 [2024-11-19 11:59:29.011142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:15.666 [2024-11-19 11:59:29.011149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.351 ms 00:28:15.666 [2024-11-19 11:59:29.011159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.666 [2024-11-19 11:59:29.011349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.666 [2024-11-19 11:59:29.011357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:15.666 [2024-11-19 11:59:29.011366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.163 ms 00:28:15.666 [2024-11-19 11:59:29.011371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.666 [2024-11-19 11:59:29.012456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.666 [2024-11-19 11:59:29.012483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:15.666 [2024-11-19 11:59:29.012491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.072 ms 00:28:15.666 [2024-11-19 11:59:29.012497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.666 [2024-11-19 11:59:29.013374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.666 [2024-11-19 11:59:29.013394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:15.666 [2024-11-19 11:59:29.013413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.852 ms 00:28:15.666 [2024-11-19 11:59:29.013421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.666 [2024-11-19 11:59:29.014742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.666 [2024-11-19 11:59:29.014769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:15.666 [2024-11-19 11:59:29.014776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.292 ms 00:28:15.666 [2024-11-19 11:59:29.014782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.666 [2024-11-19 11:59:29.015763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.666 [2024-11-19 11:59:29.015792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:15.666 [2024-11-19 11:59:29.015804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.957 ms 00:28:15.666 [2024-11-19 11:59:29.015811] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:15.666 [2024-11-19 11:59:29.015875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.666 [2024-11-19 11:59:29.015883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:15.666 [2024-11-19 11:59:29.015890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:15.666 [2024-11-19 11:59:29.015900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.666 [2024-11-19 11:59:29.017170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.666 [2024-11-19 11:59:29.017273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:15.666 [2024-11-19 11:59:29.017284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.257 ms 00:28:15.666 [2024-11-19 11:59:29.017291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.666 [2024-11-19 11:59:29.018497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.666 [2024-11-19 11:59:29.018527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:15.666 [2024-11-19 11:59:29.018534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.181 ms 00:28:15.666 [2024-11-19 11:59:29.018539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.666 [2024-11-19 11:59:29.019475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.666 [2024-11-19 11:59:29.019521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:15.666 [2024-11-19 11:59:29.019529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.910 ms 00:28:15.666 [2024-11-19 11:59:29.019534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.666 [2024-11-19 11:59:29.020364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.666 [2024-11-19 11:59:29.020468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:15.666 [2024-11-19 11:59:29.020480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.783 ms 00:28:15.666 [2024-11-19 11:59:29.020486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.666 [2024-11-19 11:59:29.020511] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:15.666 [2024-11-19 11:59:29.020521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:15.666 [2024-11-19 11:59:29.020529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:15.666 [2024-11-19 11:59:29.020535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:15.666 [2024-11-19 11:59:29.020542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:15.666 [2024-11-19 11:59:29.020548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 
[2024-11-19 11:59:29.020570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:15.667 [2024-11-19 11:59:29.020629] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:15.667 [2024-11-19 11:59:29.020635] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9d1d4bba-6ce9-44a6-b3fc-6c72ff1f02c6 00:28:15.667 [2024-11-19 11:59:29.020644] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:15.667 [2024-11-19 11:59:29.020650] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:15.667 [2024-11-19 11:59:29.020655] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:15.667 [2024-11-19 11:59:29.020661] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:15.667 [2024-11-19 11:59:29.020666] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:15.667 [2024-11-19 11:59:29.020672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:15.667 [2024-11-19 11:59:29.020678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:15.667 [2024-11-19 11:59:29.020683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:15.667 [2024-11-19 11:59:29.020688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:15.667 [2024-11-19 11:59:29.020694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.667 [2024-11-19 11:59:29.020700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:15.667 [2024-11-19 11:59:29.020706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.184 ms 00:28:15.667 [2024-11-19 11:59:29.020712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.021912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.667 [2024-11-19 11:59:29.021936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:15.667 [2024-11-19 11:59:29.021943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.187 ms 00:28:15.667 [2024-11-19 11:59:29.021949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
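Every management step in the FTL trace above follows the same four-entry pattern: an "Action" marker, then "name:", "duration:" and "status:" lines. That regularity makes it easy to pull a per-step timing table out of a saved console log. The snippet below is a rough sketch specific to this log's layout, not part of the test suite; "build.log" is a placeholder path, and the " 00:…" substitution strips the Jenkins elapsed-time stamp that trails each field in this build.

grep -oE '(name|duration): [^*[]+' build.log |
  awk -F': ' '
    /^name/     { step = $2; sub(/ 00:.*/, "", step) }   # remember the step name, drop trailing timestamp
    /^duration/ { printf "%10.3f ms  %s\n", $2, step }   # pair it with the duration logged just after
  ' | sort -rn | head

Run against this section, the slow steps surface immediately: the two "Chunk recovery, read vss" steps at 467.516 ms and 444.499 ms dominate the "Recover open chunk" processes, while most other steps finish in well under a millisecond.
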
00:28:15.667 [2024-11-19 11:59:29.022017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:15.667 [2024-11-19 11:59:29.022023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:15.667 [2024-11-19 11:59:29.022034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:28:15.667 [2024-11-19 11:59:29.022041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.026436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.667 [2024-11-19 11:59:29.026459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:15.667 [2024-11-19 11:59:29.026466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.667 [2024-11-19 11:59:29.026472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.026492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.667 [2024-11-19 11:59:29.026499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:15.667 [2024-11-19 11:59:29.026504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.667 [2024-11-19 11:59:29.026516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.026569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.667 [2024-11-19 11:59:29.026577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:15.667 [2024-11-19 11:59:29.026584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.667 [2024-11-19 11:59:29.026589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.026603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.667 [2024-11-19 11:59:29.026609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:15.667 [2024-11-19 11:59:29.026614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.667 [2024-11-19 11:59:29.026620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.033905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.667 [2024-11-19 11:59:29.033936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:15.667 [2024-11-19 11:59:29.033944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.667 [2024-11-19 11:59:29.033951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.039887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.667 [2024-11-19 11:59:29.040078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:15.667 [2024-11-19 11:59:29.040091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.667 [2024-11-19 11:59:29.040097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.040152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.667 [2024-11-19 11:59:29.040159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:15.667 [2024-11-19 11:59:29.040166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.667 [2024-11-19 11:59:29.040171] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.040196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.667 [2024-11-19 11:59:29.040203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:15.667 [2024-11-19 11:59:29.040209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.667 [2024-11-19 11:59:29.040215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.040269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.667 [2024-11-19 11:59:29.040279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:15.667 [2024-11-19 11:59:29.040288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.667 [2024-11-19 11:59:29.040293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.040317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.667 [2024-11-19 11:59:29.040324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:15.667 [2024-11-19 11:59:29.040330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.667 [2024-11-19 11:59:29.040335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.040363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.667 [2024-11-19 11:59:29.040372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:15.667 [2024-11-19 11:59:29.040378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.667 [2024-11-19 11:59:29.040384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.040432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:15.667 [2024-11-19 11:59:29.040440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:15.667 [2024-11-19 11:59:29.040446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:15.667 [2024-11-19 11:59:29.040452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:15.667 [2024-11-19 11:59:29.040546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 29.824 ms, result 0 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:15.930 Remove shared memory files 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:15.930 11:59:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92295 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:15.930 ************************************ 00:28:15.930 END TEST ftl_upgrade_shutdown 00:28:15.930 ************************************ 00:28:15.930 00:28:15.930 real 1m9.903s 00:28:15.930 user 1m34.734s 00:28:15.930 sys 0m17.589s 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:15.930 11:59:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:15.930 11:59:29 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:15.930 11:59:29 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:15.930 11:59:29 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:15.930 11:59:29 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:15.930 11:59:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:15.930 ************************************ 00:28:15.930 START TEST ftl_restore_fast 00:28:15.930 ************************************ 00:28:15.931 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:15.931 * Looking for test storage... 00:28:15.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:15.931 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:15.931 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:15.931 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.197 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:16.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.198 --rc genhtml_branch_coverage=1 00:28:16.198 --rc genhtml_function_coverage=1 00:28:16.198 --rc genhtml_legend=1 00:28:16.198 --rc geninfo_all_blocks=1 00:28:16.198 --rc geninfo_unexecuted_blocks=1 00:28:16.198 00:28:16.198 ' 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.198 --rc genhtml_branch_coverage=1 00:28:16.198 --rc genhtml_function_coverage=1 00:28:16.198 --rc genhtml_legend=1 00:28:16.198 --rc geninfo_all_blocks=1 00:28:16.198 --rc geninfo_unexecuted_blocks=1 00:28:16.198 00:28:16.198 ' 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.198 --rc genhtml_branch_coverage=1 00:28:16.198 --rc genhtml_function_coverage=1 00:28:16.198 --rc genhtml_legend=1 00:28:16.198 --rc geninfo_all_blocks=1 00:28:16.198 --rc geninfo_unexecuted_blocks=1 00:28:16.198 00:28:16.198 ' 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.198 --rc genhtml_branch_coverage=1 00:28:16.198 --rc genhtml_function_coverage=1 00:28:16.198 --rc genhtml_legend=1 00:28:16.198 --rc geninfo_all_blocks=1 00:28:16.198 --rc geninfo_unexecuted_blocks=1 00:28:16.198 00:28:16.198 ' 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
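The scripts/common.sh xtrace a few entries up ("lt 1.15 2" expanding into "cmp_versions 1.15 '<' 2") is the harness comparing a version string against 2 to decide which lcov coverage flags apply, which is where the "--rc lcov_branch_coverage=1" spellings above come from. Condensed into a standalone form, the idiom looks like the sketch below; this is a paraphrase of what the trace shows, not the verbatim upstream helper.

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {                        # usage: cmp_versions 1.15 '<' 2
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"        # split on '.', '-' and ':', as in the trace
  IFS=.-: read -ra ver2 <<< "$3"        # $2 carries the operator
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do     # compare field by field; missing fields count as 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
  done
  [[ $2 == '=' ]]                       # every field matched
}

lt 1.15 2 && echo "older than 2.x"      # the comparison this run performs
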
00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:16.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
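The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message just above is printed by the harness's waitforlisten helper, which the script invokes right after launching spdk_tgt (it appears below as "waitforlisten 92680"); the "(( i == 0 ))" / "return 0" pair visible elsewhere in the trace is the tail of its retry loop. A condensed sketch of that shape follows; the retry budget and poll interval are illustrative, not the exact upstream values.

waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for (( i = 100; i != 0; i-- )); do
    kill -0 "$pid" 2>/dev/null || return 1        # target died during startup
    # rpc_get_methods is a cheap RPC that proves the socket is serving
    "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && break
    sleep 0.5
  done
  (( i == 0 )) && return 1                        # retries exhausted
  return 0                                        # the 'return 0' seen in the trace
}
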
00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.tnP1iJhDfU 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92680 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92680 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 92680 ']' 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:16.198 11:59:29 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:16.198 [2024-11-19 11:59:29.464369] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
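The getopts xtrace above ("getopts :u:c:f opt", fast_shutdown=1, nv_cache=0000:00:10.0, "shift 3") is restore.sh digesting its invocation, restore.sh -f -c 0000:00:10.0 0000:00:11.0. A condensed paraphrase of that option handling is sketched below; the literal "shift 3" in the trace is the expansion of the OPTIND arithmetic for these three option words.

while getopts ':u:c:f' opt; do
  case $opt in
    c) nv_cache=$OPTARG ;;       # NV-cache controller, 0000:00:10.0 in this run
    f) fast_shutdown=1 ;;        # exercise the fast-shutdown path
    u) uuid=$OPTARG ;;           # accepted by the optstring, though unused in this run
  esac
done
shift $((OPTIND - 1))            # '-f -c 0000:00:10.0' -> shift 3
device=$1                        # base device, 0000:00:11.0 here
timeout=240                      # seconds; matches the trace
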
00:28:16.198 [2024-11-19 11:59:29.464496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92680 ] 00:28:16.198 [2024-11-19 11:59:29.597638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.460 [2024-11-19 11:59:29.633020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.032 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:17.032 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:17.032 11:59:30 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:17.032 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:17.032 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:17.032 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:17.032 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:17.032 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:17.292 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:17.292 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:17.292 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:17.292 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:17.292 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:17.292 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:17.292 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:17.292 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:17.554 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:17.554 { 00:28:17.554 "name": "nvme0n1", 00:28:17.554 "aliases": [ 00:28:17.554 "6ac475c4-c54a-4755-ad5f-f1deba036029" 00:28:17.554 ], 00:28:17.554 "product_name": "NVMe disk", 00:28:17.554 "block_size": 4096, 00:28:17.554 "num_blocks": 1310720, 00:28:17.554 "uuid": "6ac475c4-c54a-4755-ad5f-f1deba036029", 00:28:17.554 "numa_id": -1, 00:28:17.554 "assigned_rate_limits": { 00:28:17.554 "rw_ios_per_sec": 0, 00:28:17.554 "rw_mbytes_per_sec": 0, 00:28:17.554 "r_mbytes_per_sec": 0, 00:28:17.554 "w_mbytes_per_sec": 0 00:28:17.554 }, 00:28:17.554 "claimed": true, 00:28:17.554 "claim_type": "read_many_write_one", 00:28:17.554 "zoned": false, 00:28:17.554 "supported_io_types": { 00:28:17.554 "read": true, 00:28:17.554 "write": true, 00:28:17.554 "unmap": true, 00:28:17.554 "flush": true, 00:28:17.554 "reset": true, 00:28:17.554 "nvme_admin": true, 00:28:17.554 "nvme_io": true, 00:28:17.554 "nvme_io_md": false, 00:28:17.554 "write_zeroes": true, 00:28:17.554 "zcopy": false, 00:28:17.554 "get_zone_info": false, 00:28:17.554 "zone_management": false, 00:28:17.554 "zone_append": false, 00:28:17.554 "compare": true, 00:28:17.554 "compare_and_write": false, 00:28:17.554 "abort": true, 00:28:17.554 "seek_hole": false, 00:28:17.554 "seek_data": false, 00:28:17.554 "copy": true, 00:28:17.554 "nvme_iov_md": 
false 00:28:17.554 }, 00:28:17.554 "driver_specific": { 00:28:17.554 "nvme": [ 00:28:17.554 { 00:28:17.554 "pci_address": "0000:00:11.0", 00:28:17.554 "trid": { 00:28:17.554 "trtype": "PCIe", 00:28:17.554 "traddr": "0000:00:11.0" 00:28:17.554 }, 00:28:17.554 "ctrlr_data": { 00:28:17.554 "cntlid": 0, 00:28:17.554 "vendor_id": "0x1b36", 00:28:17.554 "model_number": "QEMU NVMe Ctrl", 00:28:17.554 "serial_number": "12341", 00:28:17.554 "firmware_revision": "8.0.0", 00:28:17.554 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:17.554 "oacs": { 00:28:17.554 "security": 0, 00:28:17.554 "format": 1, 00:28:17.554 "firmware": 0, 00:28:17.554 "ns_manage": 1 00:28:17.554 }, 00:28:17.554 "multi_ctrlr": false, 00:28:17.554 "ana_reporting": false 00:28:17.554 }, 00:28:17.554 "vs": { 00:28:17.554 "nvme_version": "1.4" 00:28:17.554 }, 00:28:17.554 "ns_data": { 00:28:17.554 "id": 1, 00:28:17.554 "can_share": false 00:28:17.554 } 00:28:17.554 } 00:28:17.554 ], 00:28:17.554 "mp_policy": "active_passive" 00:28:17.554 } 00:28:17.554 } 00:28:17.554 ]' 00:28:17.554 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:17.554 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:17.554 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:17.554 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:17.554 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:17.554 11:59:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:17.554 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:17.554 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:17.554 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:17.554 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:17.554 11:59:30 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:17.814 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=ab89f42e-bf72-47a9-9b79-a5812d1f990e 00:28:17.814 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:17.814 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ab89f42e-bf72-47a9-9b79-a5812d1f990e 00:28:18.071 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=e8231b0d-a0ef-4485-b02c-0a678fe3e601 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e8231b0d-a0ef-4485-b02c-0a678fe3e601 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=1f66844b-e518-4866-90f8-2cac783b45cf 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1f66844b-e518-4866-90f8-2cac783b45cf 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local 
base_bdev=1f66844b-e518-4866-90f8-2cac783b45cf 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 1f66844b-e518-4866-90f8-2cac783b45cf 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=1f66844b-e518-4866-90f8-2cac783b45cf 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:18.329 11:59:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1f66844b-e518-4866-90f8-2cac783b45cf 00:28:18.588 11:59:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:18.588 { 00:28:18.588 "name": "1f66844b-e518-4866-90f8-2cac783b45cf", 00:28:18.588 "aliases": [ 00:28:18.588 "lvs/nvme0n1p0" 00:28:18.588 ], 00:28:18.588 "product_name": "Logical Volume", 00:28:18.588 "block_size": 4096, 00:28:18.588 "num_blocks": 26476544, 00:28:18.588 "uuid": "1f66844b-e518-4866-90f8-2cac783b45cf", 00:28:18.588 "assigned_rate_limits": { 00:28:18.588 "rw_ios_per_sec": 0, 00:28:18.588 "rw_mbytes_per_sec": 0, 00:28:18.588 "r_mbytes_per_sec": 0, 00:28:18.588 "w_mbytes_per_sec": 0 00:28:18.588 }, 00:28:18.589 "claimed": false, 00:28:18.589 "zoned": false, 00:28:18.589 "supported_io_types": { 00:28:18.589 "read": true, 00:28:18.589 "write": true, 00:28:18.589 "unmap": true, 00:28:18.589 "flush": false, 00:28:18.589 "reset": true, 00:28:18.589 "nvme_admin": false, 00:28:18.589 "nvme_io": false, 00:28:18.589 "nvme_io_md": false, 00:28:18.589 "write_zeroes": true, 00:28:18.589 "zcopy": false, 00:28:18.589 "get_zone_info": false, 00:28:18.589 "zone_management": false, 00:28:18.589 "zone_append": false, 00:28:18.589 "compare": false, 00:28:18.589 "compare_and_write": false, 00:28:18.589 "abort": false, 00:28:18.589 "seek_hole": true, 00:28:18.589 "seek_data": true, 00:28:18.589 "copy": false, 00:28:18.589 "nvme_iov_md": false 00:28:18.589 }, 00:28:18.589 "driver_specific": { 00:28:18.589 "lvol": { 00:28:18.589 "lvol_store_uuid": "e8231b0d-a0ef-4485-b02c-0a678fe3e601", 00:28:18.589 "base_bdev": "nvme0n1", 00:28:18.589 "thin_provision": true, 00:28:18.589 "num_allocated_clusters": 0, 00:28:18.589 "snapshot": false, 00:28:18.589 "clone": false, 00:28:18.589 "esnap_clone": false 00:28:18.589 } 00:28:18.589 } 00:28:18.589 } 00:28:18.589 ]' 00:28:18.589 11:59:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:18.589 11:59:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:18.589 11:59:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:18.589 11:59:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:18.589 11:59:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:18.589 11:59:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:18.589 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:18.589 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:18.589 11:59:31 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
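The size probe traced above is get_bdev_size: it pulls block_size and num_blocks out of the bdev_get_bdevs JSON and converts to MiB as block_size × num_blocks / 1 MiB. For the raw namespace that is 4096 × 1310720 / 1048576 = 5120 MiB; for the lvol it is 4096 × 26476544 / 1048576 = 103424 MiB. The lvol can report 103424 MiB on a 5120 MiB namespace because it was created thin-provisioned (the -t flag on bdev_lvol_create). A sketch of the same probe (the function name below is mine):

# Probe a bdev's geometry over RPC and convert to MiB, as get_bdev_size does.
get_bdev_size_mib() {
	local bdev_name=$1 bdev_info bs nb
	bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
	bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 for this lvol
	nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544
	echo $(( bs * nb / 1024 / 1024 ))              # 103424 MiB
}

get_bdev_size_mib 1f66844b-e518-4866-90f8-2cac783b45cf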
00:28:18.848 11:59:32 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:18.848 11:59:32 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:18.848 11:59:32 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 1f66844b-e518-4866-90f8-2cac783b45cf 00:28:18.848 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=1f66844b-e518-4866-90f8-2cac783b45cf 00:28:18.848 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:18.849 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:18.849 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:18.849 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1f66844b-e518-4866-90f8-2cac783b45cf 00:28:19.146 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:19.146 { 00:28:19.146 "name": "1f66844b-e518-4866-90f8-2cac783b45cf", 00:28:19.146 "aliases": [ 00:28:19.146 "lvs/nvme0n1p0" 00:28:19.146 ], 00:28:19.146 "product_name": "Logical Volume", 00:28:19.146 "block_size": 4096, 00:28:19.146 "num_blocks": 26476544, 00:28:19.146 "uuid": "1f66844b-e518-4866-90f8-2cac783b45cf", 00:28:19.146 "assigned_rate_limits": { 00:28:19.146 "rw_ios_per_sec": 0, 00:28:19.146 "rw_mbytes_per_sec": 0, 00:28:19.146 "r_mbytes_per_sec": 0, 00:28:19.146 "w_mbytes_per_sec": 0 00:28:19.146 }, 00:28:19.146 "claimed": false, 00:28:19.146 "zoned": false, 00:28:19.146 "supported_io_types": { 00:28:19.146 "read": true, 00:28:19.146 "write": true, 00:28:19.146 "unmap": true, 00:28:19.146 "flush": false, 00:28:19.146 "reset": true, 00:28:19.146 "nvme_admin": false, 00:28:19.146 "nvme_io": false, 00:28:19.146 "nvme_io_md": false, 00:28:19.146 "write_zeroes": true, 00:28:19.146 "zcopy": false, 00:28:19.146 "get_zone_info": false, 00:28:19.146 "zone_management": false, 00:28:19.146 "zone_append": false, 00:28:19.146 "compare": false, 00:28:19.146 "compare_and_write": false, 00:28:19.146 "abort": false, 00:28:19.146 "seek_hole": true, 00:28:19.146 "seek_data": true, 00:28:19.146 "copy": false, 00:28:19.146 "nvme_iov_md": false 00:28:19.146 }, 00:28:19.146 "driver_specific": { 00:28:19.146 "lvol": { 00:28:19.146 "lvol_store_uuid": "e8231b0d-a0ef-4485-b02c-0a678fe3e601", 00:28:19.146 "base_bdev": "nvme0n1", 00:28:19.146 "thin_provision": true, 00:28:19.146 "num_allocated_clusters": 0, 00:28:19.146 "snapshot": false, 00:28:19.146 "clone": false, 00:28:19.146 "esnap_clone": false 00:28:19.146 } 00:28:19.146 } 00:28:19.146 } 00:28:19.146 ]' 00:28:19.146 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:19.146 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:19.146 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:19.146 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:19.146 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:19.146 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:19.146 11:59:32 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:19.146 11:59:32 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:19.408 11:59:32 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 
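With the cache controller attached as nvc0, common.sh carves a single 5171 MiB split partition (nvc0n1p0) out of nvc0n1 to serve as the FTL write-buffer cache. The 5171 figure is consistent with 103424 / 20 in integer arithmetic, i.e. roughly 5% of the base bdev, though the exact sizing rule lives in create_nv_cache_bdev and is not shown in this trace. The calls as logged:

# Attach the NV-cache controller and carve one 5171 MiB split from it.
"$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # -> nvc0n1
cache_size=5171                                   # MiB; ~5% of the 103424 MiB base
"$rpc_py" bdev_split_create nvc0n1 -s "$cache_size" 1                   # -> nvc0n1p0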
00:28:19.408 11:59:32 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 1f66844b-e518-4866-90f8-2cac783b45cf 00:28:19.408 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=1f66844b-e518-4866-90f8-2cac783b45cf 00:28:19.408 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:19.408 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:19.408 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:19.408 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1f66844b-e518-4866-90f8-2cac783b45cf 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:19.670 { 00:28:19.670 "name": "1f66844b-e518-4866-90f8-2cac783b45cf", 00:28:19.670 "aliases": [ 00:28:19.670 "lvs/nvme0n1p0" 00:28:19.670 ], 00:28:19.670 "product_name": "Logical Volume", 00:28:19.670 "block_size": 4096, 00:28:19.670 "num_blocks": 26476544, 00:28:19.670 "uuid": "1f66844b-e518-4866-90f8-2cac783b45cf", 00:28:19.670 "assigned_rate_limits": { 00:28:19.670 "rw_ios_per_sec": 0, 00:28:19.670 "rw_mbytes_per_sec": 0, 00:28:19.670 "r_mbytes_per_sec": 0, 00:28:19.670 "w_mbytes_per_sec": 0 00:28:19.670 }, 00:28:19.670 "claimed": false, 00:28:19.670 "zoned": false, 00:28:19.670 "supported_io_types": { 00:28:19.670 "read": true, 00:28:19.670 "write": true, 00:28:19.670 "unmap": true, 00:28:19.670 "flush": false, 00:28:19.670 "reset": true, 00:28:19.670 "nvme_admin": false, 00:28:19.670 "nvme_io": false, 00:28:19.670 "nvme_io_md": false, 00:28:19.670 "write_zeroes": true, 00:28:19.670 "zcopy": false, 00:28:19.670 "get_zone_info": false, 00:28:19.670 "zone_management": false, 00:28:19.670 "zone_append": false, 00:28:19.670 "compare": false, 00:28:19.670 "compare_and_write": false, 00:28:19.670 "abort": false, 00:28:19.670 "seek_hole": true, 00:28:19.670 "seek_data": true, 00:28:19.670 "copy": false, 00:28:19.670 "nvme_iov_md": false 00:28:19.670 }, 00:28:19.670 "driver_specific": { 00:28:19.670 "lvol": { 00:28:19.670 "lvol_store_uuid": "e8231b0d-a0ef-4485-b02c-0a678fe3e601", 00:28:19.670 "base_bdev": "nvme0n1", 00:28:19.670 "thin_provision": true, 00:28:19.670 "num_allocated_clusters": 0, 00:28:19.670 "snapshot": false, 00:28:19.670 "clone": false, 00:28:19.670 "esnap_clone": false 00:28:19.670 } 00:28:19.670 } 00:28:19.670 } 00:28:19.670 ]' 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1f66844b-e518-4866-90f8-2cac783b45cf --l2p_dram_limit 10' 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:19.670 11:59:32 ftl.ftl_restore_fast 
-- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:19.670 11:59:32 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1f66844b-e518-4866-90f8-2cac783b45cf --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:19.933 [2024-11-19 11:59:33.175373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.933 [2024-11-19 11:59:33.175437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:19.933 [2024-11-19 11:59:33.175451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:19.933 [2024-11-19 11:59:33.175461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.933 [2024-11-19 11:59:33.175528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.933 [2024-11-19 11:59:33.175541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:19.933 [2024-11-19 11:59:33.175549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:19.933 [2024-11-19 11:59:33.175560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.933 [2024-11-19 11:59:33.175582] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:19.933 [2024-11-19 11:59:33.176249] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:19.933 [2024-11-19 11:59:33.176283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.933 [2024-11-19 11:59:33.176294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:19.933 [2024-11-19 11:59:33.176308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:28:19.933 [2024-11-19 11:59:33.176318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.933 [2024-11-19 11:59:33.176542] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 90c3f044-fb37-4a86-8d3e-acf3f349e6a6 00:28:19.933 [2024-11-19 11:59:33.177563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.933 [2024-11-19 11:59:33.177586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:19.933 [2024-11-19 11:59:33.177598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:19.933 [2024-11-19 11:59:33.177605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.933 [2024-11-19 11:59:33.182518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.933 [2024-11-19 11:59:33.182541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:19.933 [2024-11-19 11:59:33.182557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.864 ms 00:28:19.933 [2024-11-19 11:59:33.182566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.933 [2024-11-19 11:59:33.182638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.933 [2024-11-19 11:59:33.182646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:19.933 [2024-11-19 11:59:33.182658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 
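The FTL device is assembled from the arguments accumulated above: base bdev = the thin lvol, a 10 MiB DRAM limit for the L2P table, cache = nvc0n1p0, plus --fast-shutdown since -f was given. The -t 240 raises the RPC client timeout, which matters because first-time startup scrubs the NV cache (visible below). As invoked in the trace:

# Build and fire the FTL create call; unquoted expansion of the args is intended.
ftl_construct_args="bdev_ftl_create -b ftl0 -d 1f66844b-e518-4866-90f8-2cac783b45cf --l2p_dram_limit 10"
ftl_construct_args+=' -c nvc0n1p0'                # NV-cache split created above
[ "${fast_shutdown:-0}" -eq 1 ] && ftl_construct_args+=' --fast-shutdown'

"$rpc_py" -t 240 $ftl_construct_args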
00:28:19.933 [2024-11-19 11:59:33.182666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.933 [2024-11-19 11:59:33.182724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.933 [2024-11-19 11:59:33.182734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:19.933 [2024-11-19 11:59:33.182744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:19.933 [2024-11-19 11:59:33.182751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.933 [2024-11-19 11:59:33.182773] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:19.933 [2024-11-19 11:59:33.184164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.933 [2024-11-19 11:59:33.184191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.933 [2024-11-19 11:59:33.184202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms 00:28:19.933 [2024-11-19 11:59:33.184211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.933 [2024-11-19 11:59:33.184241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.933 [2024-11-19 11:59:33.184250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:19.933 [2024-11-19 11:59:33.184258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:19.933 [2024-11-19 11:59:33.184268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.933 [2024-11-19 11:59:33.184285] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:19.933 [2024-11-19 11:59:33.184435] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:19.933 [2024-11-19 11:59:33.184447] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:19.933 [2024-11-19 11:59:33.184462] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:19.933 [2024-11-19 11:59:33.184471] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:19.933 [2024-11-19 11:59:33.184483] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:19.933 [2024-11-19 11:59:33.184495] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:19.933 [2024-11-19 11:59:33.184506] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:19.933 [2024-11-19 11:59:33.184513] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:19.933 [2024-11-19 11:59:33.184521] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:19.933 [2024-11-19 11:59:33.184530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.933 [2024-11-19 11:59:33.184539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:19.933 [2024-11-19 11:59:33.184546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:28:19.933 [2024-11-19 11:59:33.184555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.933 [2024-11-19 11:59:33.184640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.933 [2024-11-19 
11:59:33.184651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:19.933 [2024-11-19 11:59:33.184658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:19.933 [2024-11-19 11:59:33.184666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.933 [2024-11-19 11:59:33.184760] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:19.933 [2024-11-19 11:59:33.184776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:19.933 [2024-11-19 11:59:33.184783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.933 [2024-11-19 11:59:33.184792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.933 [2024-11-19 11:59:33.184799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:19.933 [2024-11-19 11:59:33.184808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:19.933 [2024-11-19 11:59:33.184816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:19.933 [2024-11-19 11:59:33.184825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:19.933 [2024-11-19 11:59:33.184833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:19.933 [2024-11-19 11:59:33.184842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.933 [2024-11-19 11:59:33.184849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:19.933 [2024-11-19 11:59:33.184859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:19.933 [2024-11-19 11:59:33.184866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:19.933 [2024-11-19 11:59:33.184877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:19.933 [2024-11-19 11:59:33.184885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:19.933 [2024-11-19 11:59:33.184894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.933 [2024-11-19 11:59:33.184901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:19.933 [2024-11-19 11:59:33.184911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:19.933 [2024-11-19 11:59:33.184919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.933 [2024-11-19 11:59:33.184928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:19.933 [2024-11-19 11:59:33.184935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:19.934 [2024-11-19 11:59:33.184944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.934 [2024-11-19 11:59:33.184952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:19.934 [2024-11-19 11:59:33.184961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:19.934 [2024-11-19 11:59:33.184968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.934 [2024-11-19 11:59:33.184977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:19.934 [2024-11-19 11:59:33.184984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:19.934 [2024-11-19 11:59:33.184993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.934 [2024-11-19 11:59:33.185000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 
00:28:19.934 [2024-11-19 11:59:33.185011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:19.934 [2024-11-19 11:59:33.185018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:19.934 [2024-11-19 11:59:33.185029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:19.934 [2024-11-19 11:59:33.185036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:19.934 [2024-11-19 11:59:33.185045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.934 [2024-11-19 11:59:33.185053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:19.934 [2024-11-19 11:59:33.185061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:19.934 [2024-11-19 11:59:33.185068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:19.934 [2024-11-19 11:59:33.185077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:19.934 [2024-11-19 11:59:33.185084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:19.934 [2024-11-19 11:59:33.185093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.934 [2024-11-19 11:59:33.185101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:19.934 [2024-11-19 11:59:33.185110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:19.934 [2024-11-19 11:59:33.185117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.934 [2024-11-19 11:59:33.185125] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:19.934 [2024-11-19 11:59:33.185137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:19.934 [2024-11-19 11:59:33.185151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:19.934 [2024-11-19 11:59:33.185159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:19.934 [2024-11-19 11:59:33.185169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:19.934 [2024-11-19 11:59:33.185177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:19.934 [2024-11-19 11:59:33.185189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:19.934 [2024-11-19 11:59:33.185197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:19.934 [2024-11-19 11:59:33.185206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:19.934 [2024-11-19 11:59:33.185213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:19.934 [2024-11-19 11:59:33.185225] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:19.934 [2024-11-19 11:59:33.185239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.934 [2024-11-19 11:59:33.185250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:19.934 [2024-11-19 11:59:33.185257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:19.934 [2024-11-19 11:59:33.185265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 
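The layout figures in this dump are internally consistent: 103424 MiB of base capacity at a 4096-byte block size is 26476544 blocks (matching num_blocks above); the 80 MiB l2p region holds exactly 20971520 entries at the stated 4-byte L2P address size; and each 2048-page P2L checkpoint region occupies 8 MiB. The 81920 MiB of user LBAs the L2P addresses is below the 103424 MiB raw size, which presumably reflects FTL metadata and overprovisioning. Checking the arithmetic:

# Cross-check the layout dump's figures (integer arithmetic, MiB = 1048576 bytes).
echo $(( 103424 * 1024 * 1024 / 4096 ))       # 26476544 blocks in the base bdev
echo $(( 20971520 * 4 / 1024 / 1024 ))        # 80 MiB -> the 'Region l2p' size above
echo $(( 2048 * 4096 / 1024 / 1024 ))         # 8 MiB  -> each p2l checkpoint region
echo $(( 20971520 * 4096 / 1024 / 1024 ))     # 81920 MiB of user LBAs mapped by the L2P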
00:28:19.934 [2024-11-19 11:59:33.185272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:19.934 [2024-11-19 11:59:33.185281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:19.934 [2024-11-19 11:59:33.185287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:19.934 [2024-11-19 11:59:33.185297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:19.934 [2024-11-19 11:59:33.185304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:19.934 [2024-11-19 11:59:33.185312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:19.934 [2024-11-19 11:59:33.185319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:19.934 [2024-11-19 11:59:33.185328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:19.934 [2024-11-19 11:59:33.185335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:19.934 [2024-11-19 11:59:33.185343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:19.934 [2024-11-19 11:59:33.185350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:19.934 [2024-11-19 11:59:33.185358] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:19.934 [2024-11-19 11:59:33.185368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.934 [2024-11-19 11:59:33.185377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:19.934 [2024-11-19 11:59:33.185384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:19.934 [2024-11-19 11:59:33.185392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:19.934 [2024-11-19 11:59:33.185400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:19.934 [2024-11-19 11:59:33.185428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.934 [2024-11-19 11:59:33.185436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:19.934 [2024-11-19 11:59:33.185447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:28:19.934 [2024-11-19 11:59:33.185454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.934 [2024-11-19 11:59:33.185492] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs 
scrubbing, this may take a while. 00:28:19.934 [2024-11-19 11:59:33.185501] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:22.482 [2024-11-19 11:59:35.342959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.343011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:22.483 [2024-11-19 11:59:35.343029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2157.460 ms 00:28:22.483 [2024-11-19 11:59:35.343038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.351351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.351392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:22.483 [2024-11-19 11:59:35.351418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.233 ms 00:28:22.483 [2024-11-19 11:59:35.351427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.351526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.351535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:22.483 [2024-11-19 11:59:35.351548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:22.483 [2024-11-19 11:59:35.351556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.359275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.359311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:22.483 [2024-11-19 11:59:35.359322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.669 ms 00:28:22.483 [2024-11-19 11:59:35.359334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.359363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.359370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:22.483 [2024-11-19 11:59:35.359382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:22.483 [2024-11-19 11:59:35.359389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.359758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.359778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:22.483 [2024-11-19 11:59:35.359789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:28:22.483 [2024-11-19 11:59:35.359796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.359906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.359919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:22.483 [2024-11-19 11:59:35.359930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:28:22.483 [2024-11-19 11:59:35.359941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.372869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.372913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:28:22.483 [2024-11-19 11:59:35.372930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.902 ms 00:28:22.483 [2024-11-19 11:59:35.372941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.382456] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:22.483 [2024-11-19 11:59:35.385078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.385106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:22.483 [2024-11-19 11:59:35.385116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.030 ms 00:28:22.483 [2024-11-19 11:59:35.385127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.426340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.426387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:22.483 [2024-11-19 11:59:35.426399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.185 ms 00:28:22.483 [2024-11-19 11:59:35.426421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.426602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.426614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:22.483 [2024-11-19 11:59:35.426623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:28:22.483 [2024-11-19 11:59:35.426632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.429346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.429405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:22.483 [2024-11-19 11:59:35.429432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.696 ms 00:28:22.483 [2024-11-19 11:59:35.429442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.431806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.431839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:22.483 [2024-11-19 11:59:35.431849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms 00:28:22.483 [2024-11-19 11:59:35.431859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.432151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.432167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:22.483 [2024-11-19 11:59:35.432179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:28:22.483 [2024-11-19 11:59:35.432190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.455682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.455721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:22.483 [2024-11-19 11:59:35.455732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.474 ms 00:28:22.483 [2024-11-19 11:59:35.455741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 
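Each startup step logs a name/duration/status triplet from mngt/ftl_mngt.c, and the Scrub NV cache step dominates at 2157.460 ms of the roughly 2290 ms total reported just below; the ftl_l2p_cache notice also confirms the DRAM cap, with at most 9 of the configured 10 MiB resident. A hedged one-liner to tabulate such timings from a captured console log (the log path is illustrative, and the sum only approximates the reported total):

# Sum the per-step durations from a saved startup log.
grep -o 'duration: [0-9.]* ms' ftl_startup.log |
	awk '{ total += $2 } END { printf "sum of steps: %.3f ms\n", total }'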
[2024-11-19 11:59:35.459275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.459307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:22.483 [2024-11-19 11:59:35.459317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.477 ms 00:28:22.483 [2024-11-19 11:59:35.459327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.462031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.462062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:22.483 [2024-11-19 11:59:35.462070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.673 ms 00:28:22.483 [2024-11-19 11:59:35.462079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.465023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.465056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:22.483 [2024-11-19 11:59:35.465066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.914 ms 00:28:22.483 [2024-11-19 11:59:35.465078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.465114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.465126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:22.483 [2024-11-19 11:59:35.465135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:22.483 [2024-11-19 11:59:35.465145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.465206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.465218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:22.483 [2024-11-19 11:59:35.465226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:22.483 [2024-11-19 11:59:35.465236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.466118] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2290.366 ms, result 0 00:28:22.483 { 00:28:22.483 "name": "ftl0", 00:28:22.483 "uuid": "90c3f044-fb37-4a86-8d3e-acf3f349e6a6" 00:28:22.483 } 00:28:22.483 11:59:35 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:22.483 11:59:35 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:22.483 11:59:35 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:22.483 11:59:35 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:22.483 [2024-11-19 11:59:35.875189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.875231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:22.483 [2024-11-19 11:59:35.875249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:22.483 [2024-11-19 11:59:35.875257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.875282] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:28:22.483 [2024-11-19 11:59:35.875763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.875790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:22.483 [2024-11-19 11:59:35.875800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:28:22.483 [2024-11-19 11:59:35.875809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.876061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.876073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:22.483 [2024-11-19 11:59:35.876082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:28:22.483 [2024-11-19 11:59:35.876092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.879324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.879346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:22.483 [2024-11-19 11:59:35.879356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:28:22.483 [2024-11-19 11:59:35.879366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.885615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.885666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:22.483 [2024-11-19 11:59:35.885678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.228 ms 00:28:22.483 [2024-11-19 11:59:35.885688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.483 [2024-11-19 11:59:35.887786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.483 [2024-11-19 11:59:35.887823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:22.484 [2024-11-19 11:59:35.887832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.995 ms 00:28:22.484 [2024-11-19 11:59:35.887841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.484 [2024-11-19 11:59:35.891092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.484 [2024-11-19 11:59:35.891131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:22.484 [2024-11-19 11:59:35.891140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:28:22.484 [2024-11-19 11:59:35.891149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.484 [2024-11-19 11:59:35.891267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.484 [2024-11-19 11:59:35.891277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:22.484 [2024-11-19 11:59:35.891286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:28:22.484 [2024-11-19 11:59:35.891295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.747 [2024-11-19 11:59:35.892869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.747 [2024-11-19 11:59:35.892900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:22.747 [2024-11-19 11:59:35.892908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:28:22.747 
[2024-11-19 11:59:35.892917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.747 [2024-11-19 11:59:35.894057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.747 [2024-11-19 11:59:35.894101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:22.747 [2024-11-19 11:59:35.894110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:28:22.747 [2024-11-19 11:59:35.894118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.747 [2024-11-19 11:59:35.895083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.747 [2024-11-19 11:59:35.895113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:22.747 [2024-11-19 11:59:35.895121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:28:22.747 [2024-11-19 11:59:35.895129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.747 [2024-11-19 11:59:35.896022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.747 [2024-11-19 11:59:35.896051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:22.747 [2024-11-19 11:59:35.896065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:28:22.747 [2024-11-19 11:59:35.896076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.747 [2024-11-19 11:59:35.896104] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:22.747 [2024-11-19 11:59:35.896119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:28:22.747 [2024-11-19 11:59:35.896239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:22.747 [2024-11-19 11:59:35.896386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:22.748 [2024-11-19 11:59:35.896393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:22.748 [2024-11-19 11:59:35.896401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:22.748 [2024-11-19 11:59:35.896420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:22.748 [2024-11-19 11:59:35.896433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:22.748 [2024-11-19 11:59:35.896440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:22.748 [2024-11-19 11:59:35.896449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:28:22.748 [2024-11-19 11:59:35.896457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 40-100: 0 / 261120 wr_cnt: 0 state: free [... 61 identical per-band lines condensed ...]
00:28:22.748 [2024-11-19 11:59:35.896964] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:22.748 [2024-11-19 11:59:35.896972] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90c3f044-fb37-4a86-8d3e-acf3f349e6a6
00:28:22.748 [2024-11-19 11:59:35.896981] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:28:22.748 [2024-11-19 11:59:35.896988] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:28:22.748 [2024-11-19 11:59:35.896998] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:28:22.748 [2024-11-19 11:59:35.897005] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:28:22.748 [2024-11-19 11:59:35.897013] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:22.748 [2024-11-19 11:59:35.897021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:28:22.748 [2024-11-19 11:59:35.897029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:28:22.748 [2024-11-19 11:59:35.897035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:28:22.748 [2024-11-19 11:59:35.897043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:28:22.748 [2024-11-19 11:59:35.897050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:22.748 [2024-11-19 11:59:35.897058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:22.748 [2024-11-19 11:59:35.897068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms
00:28:22.748 [2024-11-19 11:59:35.897077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:22.748 [2024-11-19 11:59:35.898432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:22.748 [2024-11-19 11:59:35.898453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Deinitialize L2P 00:28:22.748 [2024-11-19 11:59:35.898462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:28:22.748 [2024-11-19 11:59:35.898471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.748 [2024-11-19 11:59:35.898546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.748 [2024-11-19 11:59:35.898556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:22.748 [2024-11-19 11:59:35.898563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:22.748 [2024-11-19 11:59:35.898571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.748 [2024-11-19 11:59:35.903629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.748 [2024-11-19 11:59:35.903664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:22.748 [2024-11-19 11:59:35.903678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.748 [2024-11-19 11:59:35.903687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.749 [2024-11-19 11:59:35.903741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.749 [2024-11-19 11:59:35.903754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:22.749 [2024-11-19 11:59:35.903761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.749 [2024-11-19 11:59:35.903770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.749 [2024-11-19 11:59:35.903833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.749 [2024-11-19 11:59:35.903846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:22.749 [2024-11-19 11:59:35.903853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.749 [2024-11-19 11:59:35.903862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.749 [2024-11-19 11:59:35.903879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.749 [2024-11-19 11:59:35.903894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:22.749 [2024-11-19 11:59:35.903902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.749 [2024-11-19 11:59:35.903914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.749 [2024-11-19 11:59:35.912207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.749 [2024-11-19 11:59:35.912248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:22.749 [2024-11-19 11:59:35.912258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.749 [2024-11-19 11:59:35.912269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.749 [2024-11-19 11:59:35.919524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.749 [2024-11-19 11:59:35.919567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:22.749 [2024-11-19 11:59:35.919577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.749 [2024-11-19 11:59:35.919589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.749 [2024-11-19 11:59:35.919631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.749 [2024-11-19 
11:59:35.919643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:22.749 [2024-11-19 11:59:35.919651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.749 [2024-11-19 11:59:35.919660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.749 [2024-11-19 11:59:35.919711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.749 [2024-11-19 11:59:35.919726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:22.749 [2024-11-19 11:59:35.919734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.749 [2024-11-19 11:59:35.919745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.749 [2024-11-19 11:59:35.919806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.749 [2024-11-19 11:59:35.919818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:22.749 [2024-11-19 11:59:35.919825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.749 [2024-11-19 11:59:35.919834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.749 [2024-11-19 11:59:35.919862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.749 [2024-11-19 11:59:35.919873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:22.749 [2024-11-19 11:59:35.919880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.749 [2024-11-19 11:59:35.919891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.749 [2024-11-19 11:59:35.919930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.749 [2024-11-19 11:59:35.919942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:22.749 [2024-11-19 11:59:35.919949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.749 [2024-11-19 11:59:35.919958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.749 [2024-11-19 11:59:35.920001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.749 [2024-11-19 11:59:35.920015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:22.749 [2024-11-19 11:59:35.920026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.749 [2024-11-19 11:59:35.920036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.749 [2024-11-19 11:59:35.920156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.938 ms, result 0 00:28:22.749 true 00:28:22.749 11:59:35 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92680 00:28:22.749 11:59:35 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92680 ']' 00:28:22.749 11:59:35 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92680 00:28:22.749 11:59:35 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:28:22.749 11:59:35 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:22.749 11:59:35 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92680 00:28:22.749 11:59:35 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:22.749 11:59:35 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:28:22.749 killing process with pid 92680 00:28:22.749 11:59:35 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92680' 00:28:22.749 11:59:35 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 92680 00:28:22.749 11:59:35 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 92680 00:28:29.351 11:59:42 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:33.555 262144+0 records in 00:28:33.555 262144+0 records out 00:28:33.555 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.33723 s, 248 MB/s 00:28:33.555 11:59:46 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:35.454 11:59:48 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:35.454 [2024-11-19 11:59:48.509727] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:28:35.454 [2024-11-19 11:59:48.509815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92876 ] 00:28:35.454 [2024-11-19 11:59:48.643350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.454 [2024-11-19 11:59:48.676574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.454 [2024-11-19 11:59:48.764280] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:35.454 [2024-11-19 11:59:48.764348] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:35.716 [2024-11-19 11:59:48.919541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.716 [2024-11-19 11:59:48.919589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:35.716 [2024-11-19 11:59:48.919605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:35.716 [2024-11-19 11:59:48.919619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.716 [2024-11-19 11:59:48.919661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.716 [2024-11-19 11:59:48.919670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:35.716 [2024-11-19 11:59:48.919679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:35.716 [2024-11-19 11:59:48.919691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.716 [2024-11-19 11:59:48.919713] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:35.716 [2024-11-19 11:59:48.920252] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:35.716 [2024-11-19 11:59:48.920290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.716 [2024-11-19 11:59:48.920300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:35.716 [2024-11-19 11:59:48.920312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:28:35.716 [2024-11-19 11:59:48.920319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:35.716 [2024-11-19 11:59:48.921367] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:35.716 [2024-11-19 11:59:48.923506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.716 [2024-11-19 11:59:48.923541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:35.716 [2024-11-19 11:59:48.923553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:28:35.716 [2024-11-19 11:59:48.923561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.716 [2024-11-19 11:59:48.923625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.716 [2024-11-19 11:59:48.923641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:35.716 [2024-11-19 11:59:48.923650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:35.716 [2024-11-19 11:59:48.923660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.717 [2024-11-19 11:59:48.928483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.717 [2024-11-19 11:59:48.928515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:35.717 [2024-11-19 11:59:48.928524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.759 ms 00:28:35.717 [2024-11-19 11:59:48.928532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.717 [2024-11-19 11:59:48.928613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.717 [2024-11-19 11:59:48.928622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:35.717 [2024-11-19 11:59:48.928630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:35.717 [2024-11-19 11:59:48.928640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.717 [2024-11-19 11:59:48.928686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.717 [2024-11-19 11:59:48.928699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:35.717 [2024-11-19 11:59:48.928707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:35.717 [2024-11-19 11:59:48.928714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.717 [2024-11-19 11:59:48.928735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:35.717 [2024-11-19 11:59:48.930070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.717 [2024-11-19 11:59:48.930103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:35.717 [2024-11-19 11:59:48.930112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.340 ms 00:28:35.717 [2024-11-19 11:59:48.930122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.717 [2024-11-19 11:59:48.930149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.717 [2024-11-19 11:59:48.930157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:35.717 [2024-11-19 11:59:48.930164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:35.717 [2024-11-19 11:59:48.930171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.717 [2024-11-19 11:59:48.930189] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout 
setup mode 0 00:28:35.717 [2024-11-19 11:59:48.930208] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:35.717 [2024-11-19 11:59:48.930247] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:35.717 [2024-11-19 11:59:48.930261] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:35.717 [2024-11-19 11:59:48.930364] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:35.717 [2024-11-19 11:59:48.930385] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:35.717 [2024-11-19 11:59:48.930395] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:35.717 [2024-11-19 11:59:48.930423] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:35.717 [2024-11-19 11:59:48.930435] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:35.717 [2024-11-19 11:59:48.930443] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:35.717 [2024-11-19 11:59:48.930454] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:35.717 [2024-11-19 11:59:48.930461] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:35.717 [2024-11-19 11:59:48.930468] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:35.717 [2024-11-19 11:59:48.930476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.717 [2024-11-19 11:59:48.930486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:35.717 [2024-11-19 11:59:48.930493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:28:35.717 [2024-11-19 11:59:48.930500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.717 [2024-11-19 11:59:48.930582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.717 [2024-11-19 11:59:48.930591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:35.717 [2024-11-19 11:59:48.930600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:35.717 [2024-11-19 11:59:48.930607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.717 [2024-11-19 11:59:48.930701] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:35.717 [2024-11-19 11:59:48.930714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:35.717 [2024-11-19 11:59:48.930723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:35.717 [2024-11-19 11:59:48.930731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.717 [2024-11-19 11:59:48.930739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:35.717 [2024-11-19 11:59:48.930748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:35.717 [2024-11-19 11:59:48.930755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:35.717 [2024-11-19 11:59:48.930763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:35.717 [2024-11-19 11:59:48.930771] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:35.717 [2024-11-19 11:59:48.930779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:35.717 [2024-11-19 11:59:48.930787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:35.717 [2024-11-19 11:59:48.930795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:35.717 [2024-11-19 11:59:48.930802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:35.717 [2024-11-19 11:59:48.930812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:35.717 [2024-11-19 11:59:48.930820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:35.717 [2024-11-19 11:59:48.930828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.717 [2024-11-19 11:59:48.930835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:35.717 [2024-11-19 11:59:48.930843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:35.717 [2024-11-19 11:59:48.930850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.717 [2024-11-19 11:59:48.930857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:35.717 [2024-11-19 11:59:48.930865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:35.717 [2024-11-19 11:59:48.930872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.717 [2024-11-19 11:59:48.930880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:35.717 [2024-11-19 11:59:48.930887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:35.717 [2024-11-19 11:59:48.930895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.717 [2024-11-19 11:59:48.930902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:35.717 [2024-11-19 11:59:48.930909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:35.717 [2024-11-19 11:59:48.930916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.717 [2024-11-19 11:59:48.930923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:35.717 [2024-11-19 11:59:48.930936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:35.717 [2024-11-19 11:59:48.930944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.717 [2024-11-19 11:59:48.930951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:35.717 [2024-11-19 11:59:48.930959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:35.717 [2024-11-19 11:59:48.930966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:35.717 [2024-11-19 11:59:48.930973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:35.717 [2024-11-19 11:59:48.930981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:35.717 [2024-11-19 11:59:48.930988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:35.717 [2024-11-19 11:59:48.930995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:35.717 [2024-11-19 11:59:48.931002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:35.717 [2024-11-19 11:59:48.931010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.717 
[2024-11-19 11:59:48.931018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:35.717 [2024-11-19 11:59:48.931027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:35.717 [2024-11-19 11:59:48.931035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.717 [2024-11-19 11:59:48.931042] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:35.717 [2024-11-19 11:59:48.931050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:35.717 [2024-11-19 11:59:48.931060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:35.717 [2024-11-19 11:59:48.931070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.717 [2024-11-19 11:59:48.931078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:35.717 [2024-11-19 11:59:48.931085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:35.717 [2024-11-19 11:59:48.931093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:35.717 [2024-11-19 11:59:48.931101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:35.717 [2024-11-19 11:59:48.931109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:35.717 [2024-11-19 11:59:48.931116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:35.717 [2024-11-19 11:59:48.931125] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:35.717 [2024-11-19 11:59:48.931135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.717 [2024-11-19 11:59:48.931144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:35.717 [2024-11-19 11:59:48.931152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:35.717 [2024-11-19 11:59:48.931160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:35.717 [2024-11-19 11:59:48.931168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:35.717 [2024-11-19 11:59:48.931177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:35.718 [2024-11-19 11:59:48.931185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:35.718 [2024-11-19 11:59:48.931195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:35.718 [2024-11-19 11:59:48.931203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:35.718 [2024-11-19 11:59:48.931211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:35.718 [2024-11-19 11:59:48.931224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:35.718 [2024-11-19 
11:59:48.931232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:35.718 [2024-11-19 11:59:48.931240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:35.718 [2024-11-19 11:59:48.931248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:35.718 [2024-11-19 11:59:48.931256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:35.718 [2024-11-19 11:59:48.931263] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:35.718 [2024-11-19 11:59:48.931270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.718 [2024-11-19 11:59:48.931277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:35.718 [2024-11-19 11:59:48.931285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:35.718 [2024-11-19 11:59:48.931293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:35.718 [2024-11-19 11:59:48.931300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:35.718 [2024-11-19 11:59:48.931307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:48.931315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:35.718 [2024-11-19 11:59:48.931323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:28:35.718 [2024-11-19 11:59:48.931330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:48.950401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:48.950458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:35.718 [2024-11-19 11:59:48.950470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.029 ms 00:28:35.718 [2024-11-19 11:59:48.950480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:48.950569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:48.950579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:35.718 [2024-11-19 11:59:48.950587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:35.718 [2024-11-19 11:59:48.950595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:48.960071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:48.960115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:35.718 [2024-11-19 11:59:48.960129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.420 ms 00:28:35.718 [2024-11-19 11:59:48.960140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 
11:59:48.960184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:48.960196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:35.718 [2024-11-19 11:59:48.960207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:35.718 [2024-11-19 11:59:48.960218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:48.960623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:48.960668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:35.718 [2024-11-19 11:59:48.960680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:28:35.718 [2024-11-19 11:59:48.960693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:48.960868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:48.960888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:35.718 [2024-11-19 11:59:48.960902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:28:35.718 [2024-11-19 11:59:48.960914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:48.965955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:48.965987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:35.718 [2024-11-19 11:59:48.966000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.993 ms 00:28:35.718 [2024-11-19 11:59:48.966007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:48.968684] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:35.718 [2024-11-19 11:59:48.968724] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:35.718 [2024-11-19 11:59:48.968736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:48.968745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:35.718 [2024-11-19 11:59:48.968753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.652 ms 00:28:35.718 [2024-11-19 11:59:48.968760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:48.983333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:48.983379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:35.718 [2024-11-19 11:59:48.983396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.537 ms 00:28:35.718 [2024-11-19 11:59:48.983415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:48.985421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:48.985451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:35.718 [2024-11-19 11:59:48.985459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.963 ms 00:28:35.718 [2024-11-19 11:59:48.985466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:48.987193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:28:35.718 [2024-11-19 11:59:48.987223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:35.718 [2024-11-19 11:59:48.987231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.695 ms 00:28:35.718 [2024-11-19 11:59:48.987237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:48.987586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:48.987603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:35.718 [2024-11-19 11:59:48.987617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:28:35.718 [2024-11-19 11:59:48.987624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:49.003333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:49.003390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:35.718 [2024-11-19 11:59:49.003419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.692 ms 00:28:35.718 [2024-11-19 11:59:49.003429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:49.011320] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:35.718 [2024-11-19 11:59:49.013918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:49.013948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:35.718 [2024-11-19 11:59:49.013961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.435 ms 00:28:35.718 [2024-11-19 11:59:49.013969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:49.014028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:49.014038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:35.718 [2024-11-19 11:59:49.014048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:35.718 [2024-11-19 11:59:49.014055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:49.014143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:49.014154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:35.718 [2024-11-19 11:59:49.014162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:35.718 [2024-11-19 11:59:49.014169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:49.014193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:49.014205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:35.718 [2024-11-19 11:59:49.014213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:35.718 [2024-11-19 11:59:49.014220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.718 [2024-11-19 11:59:49.014250] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:35.718 [2024-11-19 11:59:49.014259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.718 [2024-11-19 11:59:49.014267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:28:35.718 [2024-11-19 11:59:49.014274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:28:35.718 [2024-11-19 11:59:49.014283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:35.718 [2024-11-19 11:59:49.018293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:35.718 [2024-11-19 11:59:49.018332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:28:35.718 [2024-11-19 11:59:49.018346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.993 ms
00:28:35.718 [2024-11-19 11:59:49.018353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:35.718 [2024-11-19 11:59:49.018433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:35.718 [2024-11-19 11:59:49.018443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:35.718 [2024-11-19 11:59:49.018451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms
00:28:35.718 [2024-11-19 11:59:49.018458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:35.719 [2024-11-19 11:59:49.019328] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 99.397 ms, result 0
00:28:36.664  [2024-11-19T11:59:51.451Z] Copying: 21/1024 [MB] (21 MBps) [... 34 intermediate Copying progress updates condensed ...] [2024-11-19T12:00:25.164Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-19 12:00:24.872686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
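The copy that just completed ("Copying: 1024/1024") is the write phase of test/ftl/restore.sh, whose commands are visible in the shell trace earlier in this section (restore.sh@69-73). A minimal sketch of that flow, reusing only the paths, commands, and flags the trace actually shows; the read-back/verify step at the end is an assumption, since the trace has not reached it at this point, and the flag names and the .readback output path there are hypothetical mirrors of the write invocation:

  # Write phase of the FTL restore test, as exercised by restore.sh@69-73 above.
  TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  FTL_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

  # 4 KiB x 262144 blocks = 1073741824 bytes (1 GiB), matching the
  # "262144+0 records" dd report; 1073741824 B / 4.33723 s is the 248 MB/s shown.
  dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

  # Checksum of the source data, kept for comparison after the restore cycle.
  md5sum "$TESTFILE"

  # Push the file into the ftl0 bdev; --json points spdk_dd at the FTL config.
  "$SPDK_DD" --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"

  # Assumed verify step (not shown in the trace above): read the same extent
  # back and compare checksums; testfile.readback is a hypothetical name.
  "$SPDK_DD" --ib=ftl0 --of="$TESTFILE".readback --bs=4096 --count=262144 --json="$FTL_JSON"
  md5sum "$TESTFILE" "$TESTFILE".readback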
00:29:11.752 [2024-11-19 12:00:24.872733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:29:11.752 [2024-11-19 12:00:24.872751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:29:11.752 [2024-11-19 12:00:24.872758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.752 [2024-11-19 12:00:24.872774] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:11.752 [2024-11-19 12:00:24.873193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.752 [2024-11-19 12:00:24.873207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:29:11.752 [2024-11-19 12:00:24.873214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms
00:29:11.752 [2024-11-19 12:00:24.873220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.752 [2024-11-19 12:00:24.875435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.752 [2024-11-19 12:00:24.875464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:29:11.752 [2024-11-19 12:00:24.875472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.199 ms
00:29:11.752 [2024-11-19 12:00:24.875478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.752 [2024-11-19 12:00:24.875501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.752 [2024-11-19 12:00:24.875514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:29:11.752 [2024-11-19 12:00:24.875521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:29:11.752 [2024-11-19 12:00:24.875530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.752 [2024-11-19 12:00:24.875567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:11.752 [2024-11-19 12:00:24.875573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:29:11.752 [2024-11-19 12:00:24.875582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:29:11.752 [2024-11-19 12:00:24.875587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.752 [2024-11-19 12:00:24.875598] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:11.752 [2024-11-19 12:00:24.875609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free [... 100 identical per-band lines condensed ...]
00:29:11.753 [2024-11-19 12:00:24.876191] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:29:11.753 [2024-11-19 12:00:24.876197] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90c3f044-fb37-4a86-8d3e-acf3f349e6a6
00:29:11.753 [2024-11-19 12:00:24.876204] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:29:11.753 [2024-11-19 12:00:24.876210] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:29:11.753 [2024-11-19 12:00:24.876215] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:29:11.753 [2024-11-19 12:00:24.876221] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:29:11.753 [2024-11-19 12:00:24.876226] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:11.753 [2024-11-19 12:00:24.876231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:29:11.753 [2024-11-19 12:00:24.876236] ftl_debug.c:
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:11.753 [2024-11-19 12:00:24.876241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:11.753 [2024-11-19 12:00:24.876246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:11.753 [2024-11-19 12:00:24.876251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.753 [2024-11-19 12:00:24.876257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:11.753 [2024-11-19 12:00:24.876263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:29:11.753 [2024-11-19 12:00:24.876269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.753 [2024-11-19 12:00:24.877491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.753 [2024-11-19 12:00:24.877512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:11.753 [2024-11-19 12:00:24.877519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:29:11.753 [2024-11-19 12:00:24.877529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.753 [2024-11-19 12:00:24.877603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.753 [2024-11-19 12:00:24.877610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:11.753 [2024-11-19 12:00:24.877616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:11.753 [2024-11-19 12:00:24.877623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.753 [2024-11-19 12:00:24.881378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.753 [2024-11-19 12:00:24.881401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:11.753 [2024-11-19 12:00:24.881417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.753 [2024-11-19 12:00:24.881423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.753 [2024-11-19 12:00:24.881473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.753 [2024-11-19 12:00:24.881480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:11.753 [2024-11-19 12:00:24.881486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.753 [2024-11-19 12:00:24.881493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.753 [2024-11-19 12:00:24.881525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.753 [2024-11-19 12:00:24.881535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:11.753 [2024-11-19 12:00:24.881544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.753 [2024-11-19 12:00:24.881550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.753 [2024-11-19 12:00:24.881561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.753 [2024-11-19 12:00:24.881567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:11.753 [2024-11-19 12:00:24.881572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.753 [2024-11-19 12:00:24.881578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.753 [2024-11-19 12:00:24.889134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:29:11.753 [2024-11-19 12:00:24.889169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:11.753 [2024-11-19 12:00:24.889177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.753 [2024-11-19 12:00:24.889184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.753 [2024-11-19 12:00:24.895203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.753 [2024-11-19 12:00:24.895243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:11.754 [2024-11-19 12:00:24.895250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.754 [2024-11-19 12:00:24.895257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.754 [2024-11-19 12:00:24.895282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.754 [2024-11-19 12:00:24.895289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:11.754 [2024-11-19 12:00:24.895295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.754 [2024-11-19 12:00:24.895301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.754 [2024-11-19 12:00:24.895333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.754 [2024-11-19 12:00:24.895340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:11.754 [2024-11-19 12:00:24.895347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.754 [2024-11-19 12:00:24.895352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.754 [2024-11-19 12:00:24.895471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.754 [2024-11-19 12:00:24.895480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:11.754 [2024-11-19 12:00:24.895487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.754 [2024-11-19 12:00:24.895492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.754 [2024-11-19 12:00:24.895510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.754 [2024-11-19 12:00:24.895516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:11.754 [2024-11-19 12:00:24.895522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.754 [2024-11-19 12:00:24.895527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.754 [2024-11-19 12:00:24.895555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.754 [2024-11-19 12:00:24.895564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:11.754 [2024-11-19 12:00:24.895570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.754 [2024-11-19 12:00:24.895575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.754 [2024-11-19 12:00:24.895607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.754 [2024-11-19 12:00:24.895615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:11.754 [2024-11-19 12:00:24.895621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.754 [2024-11-19 12:00:24.895627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.754 [2024-11-19 12:00:24.895713] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 23.005 ms, result 0 00:29:12.319 00:29:12.319 00:29:12.319 12:00:25 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:12.577 [2024-11-19 12:00:25.794559] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:29:12.577 [2024-11-19 12:00:25.794739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93251 ] 00:29:12.577 [2024-11-19 12:00:25.938114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.577 [2024-11-19 12:00:25.967267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.837 [2024-11-19 12:00:26.047814] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:12.837 [2024-11-19 12:00:26.047874] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:12.837 [2024-11-19 12:00:26.197765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.837 [2024-11-19 12:00:26.197820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:12.837 [2024-11-19 12:00:26.197835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:12.837 [2024-11-19 12:00:26.197843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.837 [2024-11-19 12:00:26.197893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.837 [2024-11-19 12:00:26.197904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:12.837 [2024-11-19 12:00:26.197912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:12.837 [2024-11-19 12:00:26.197924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.837 [2024-11-19 12:00:26.197945] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:12.837 [2024-11-19 12:00:26.198683] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:12.837 [2024-11-19 12:00:26.198719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.837 [2024-11-19 12:00:26.198729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:12.837 [2024-11-19 12:00:26.198741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:29:12.837 [2024-11-19 12:00:26.198752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.837 [2024-11-19 12:00:26.199271] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:12.837 [2024-11-19 12:00:26.199320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.837 [2024-11-19 12:00:26.199330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:12.837 [2024-11-19 12:00:26.199340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:12.837 [2024-11-19 12:00:26.199348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.837 
[2024-11-19 12:00:26.199464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.837 [2024-11-19 12:00:26.199482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:12.837 [2024-11-19 12:00:26.199494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:12.837 [2024-11-19 12:00:26.199501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.837 [2024-11-19 12:00:26.199751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.837 [2024-11-19 12:00:26.199768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:12.837 [2024-11-19 12:00:26.199786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:29:12.837 [2024-11-19 12:00:26.199793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.837 [2024-11-19 12:00:26.199870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.837 [2024-11-19 12:00:26.199888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:12.837 [2024-11-19 12:00:26.199896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:12.837 [2024-11-19 12:00:26.199903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.837 [2024-11-19 12:00:26.199923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.837 [2024-11-19 12:00:26.199931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:12.837 [2024-11-19 12:00:26.199938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:12.837 [2024-11-19 12:00:26.199949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.838 [2024-11-19 12:00:26.199969] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:12.838 [2024-11-19 12:00:26.201384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.838 [2024-11-19 12:00:26.201429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:12.838 [2024-11-19 12:00:26.201440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:29:12.838 [2024-11-19 12:00:26.201447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.838 [2024-11-19 12:00:26.201473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.838 [2024-11-19 12:00:26.201480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:12.838 [2024-11-19 12:00:26.201487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:12.838 [2024-11-19 12:00:26.201494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.838 [2024-11-19 12:00:26.201522] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:12.838 [2024-11-19 12:00:26.201543] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:12.838 [2024-11-19 12:00:26.201582] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:12.838 [2024-11-19 12:00:26.201600] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:12.838 [2024-11-19 12:00:26.201702] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob store 0x150 bytes 00:29:12.838 [2024-11-19 12:00:26.201712] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:12.838 [2024-11-19 12:00:26.201722] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:12.838 [2024-11-19 12:00:26.201732] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:12.838 [2024-11-19 12:00:26.201743] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:12.838 [2024-11-19 12:00:26.201752] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:12.838 [2024-11-19 12:00:26.201761] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:12.838 [2024-11-19 12:00:26.201768] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:12.838 [2024-11-19 12:00:26.201775] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:12.838 [2024-11-19 12:00:26.201782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.838 [2024-11-19 12:00:26.201789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:12.838 [2024-11-19 12:00:26.201796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:29:12.838 [2024-11-19 12:00:26.201803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.838 [2024-11-19 12:00:26.201884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.838 [2024-11-19 12:00:26.201891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:12.838 [2024-11-19 12:00:26.201898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:12.838 [2024-11-19 12:00:26.201907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.838 [2024-11-19 12:00:26.202015] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:12.838 [2024-11-19 12:00:26.202026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:12.838 [2024-11-19 12:00:26.202033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:12.838 [2024-11-19 12:00:26.202045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:12.838 [2024-11-19 12:00:26.202062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:12.838 [2024-11-19 12:00:26.202077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:12.838 [2024-11-19 12:00:26.202085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:12.838 [2024-11-19 12:00:26.202101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:12.838 [2024-11-19 12:00:26.202109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:12.838 [2024-11-19 12:00:26.202116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:12.838 [2024-11-19 12:00:26.202124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md 00:29:12.838 [2024-11-19 12:00:26.202131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:12.838 [2024-11-19 12:00:26.202138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:12.838 [2024-11-19 12:00:26.202153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:12.838 [2024-11-19 12:00:26.202160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:12.838 [2024-11-19 12:00:26.202177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:12.838 [2024-11-19 12:00:26.202192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:12.838 [2024-11-19 12:00:26.202199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:12.838 [2024-11-19 12:00:26.202214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:12.838 [2024-11-19 12:00:26.202222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:12.838 [2024-11-19 12:00:26.202236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:12.838 [2024-11-19 12:00:26.202243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:12.838 [2024-11-19 12:00:26.202258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:12.838 [2024-11-19 12:00:26.202265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:12.838 [2024-11-19 12:00:26.202279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:12.838 [2024-11-19 12:00:26.202287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:12.838 [2024-11-19 12:00:26.202299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:12.838 [2024-11-19 12:00:26.202307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:12.838 [2024-11-19 12:00:26.202314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:12.838 [2024-11-19 12:00:26.202321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:12.838 [2024-11-19 12:00:26.202336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:12.838 [2024-11-19 12:00:26.202343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202351] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:12.838 [2024-11-19 12:00:26.202359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:12.838 [2024-11-19 12:00:26.202367] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:12.838 [2024-11-19 12:00:26.202378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:12.838 [2024-11-19 12:00:26.202386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:12.838 [2024-11-19 12:00:26.202394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:12.838 [2024-11-19 12:00:26.202401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:12.838 [2024-11-19 12:00:26.202421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:12.838 [2024-11-19 12:00:26.202429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:12.838 [2024-11-19 12:00:26.202438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:12.838 [2024-11-19 12:00:26.202447] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:12.838 [2024-11-19 12:00:26.202459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:12.838 [2024-11-19 12:00:26.202468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:12.838 [2024-11-19 12:00:26.202476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:12.838 [2024-11-19 12:00:26.202485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:12.838 [2024-11-19 12:00:26.202493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:12.838 [2024-11-19 12:00:26.202501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:12.838 [2024-11-19 12:00:26.202509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:12.838 [2024-11-19 12:00:26.202517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:12.838 [2024-11-19 12:00:26.202526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:12.838 [2024-11-19 12:00:26.202533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:12.838 [2024-11-19 12:00:26.202541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:12.838 [2024-11-19 12:00:26.202550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:12.839 [2024-11-19 12:00:26.202562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:12.839 [2024-11-19 12:00:26.202570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:12.839 [2024-11-19 12:00:26.202581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:12.839 [2024-11-19 12:00:26.202590] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:12.839 [2024-11-19 12:00:26.202599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:12.839 [2024-11-19 12:00:26.202610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:12.839 [2024-11-19 12:00:26.202618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:12.839 [2024-11-19 12:00:26.202627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:12.839 [2024-11-19 12:00:26.202635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:12.839 [2024-11-19 12:00:26.202643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.202651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:12.839 [2024-11-19 12:00:26.202660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:29:12.839 [2024-11-19 12:00:26.202668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.217347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.217387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:12.839 [2024-11-19 12:00:26.217402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.639 ms 00:29:12.839 [2024-11-19 12:00:26.217423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.217509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.217519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:12.839 [2024-11-19 12:00:26.217527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:29:12.839 [2024-11-19 12:00:26.217534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.226984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.227030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:12.839 [2024-11-19 12:00:26.227046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.395 ms 00:29:12.839 [2024-11-19 12:00:26.227057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.227099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.227117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:12.839 [2024-11-19 12:00:26.227128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:12.839 [2024-11-19 12:00:26.227139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.227235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.227249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize trim map 00:29:12.839 [2024-11-19 12:00:26.227261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:12.839 [2024-11-19 12:00:26.227275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.227461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.227480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:12.839 [2024-11-19 12:00:26.227491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:29:12.839 [2024-11-19 12:00:26.227502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.232689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.232720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:12.839 [2024-11-19 12:00:26.232729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.159 ms 00:29:12.839 [2024-11-19 12:00:26.232736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.232851] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:12.839 [2024-11-19 12:00:26.232863] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:12.839 [2024-11-19 12:00:26.232875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.232882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:12.839 [2024-11-19 12:00:26.232890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:12.839 [2024-11-19 12:00:26.232897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.245163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.245199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:12.839 [2024-11-19 12:00:26.245209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.251 ms 00:29:12.839 [2024-11-19 12:00:26.245216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.245330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.245341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:12.839 [2024-11-19 12:00:26.245349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:29:12.839 [2024-11-19 12:00:26.245357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.245401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.245434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:12.839 [2024-11-19 12:00:26.245442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:12.839 [2024-11-19 12:00:26.245454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.245758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.245774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:12.839 [2024-11-19 12:00:26.245782] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:29:12.839 [2024-11-19 12:00:26.245789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:12.839 [2024-11-19 12:00:26.245805] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:12.839 [2024-11-19 12:00:26.245814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:12.839 [2024-11-19 12:00:26.245821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:12.839 [2024-11-19 12:00:26.245832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:12.839 [2024-11-19 12:00:26.245845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.098 [2024-11-19 12:00:26.253721] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:13.098 [2024-11-19 12:00:26.253847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.098 [2024-11-19 12:00:26.253857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:13.098 [2024-11-19 12:00:26.253865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.986 ms 00:29:13.098 [2024-11-19 12:00:26.253872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.098 [2024-11-19 12:00:26.256135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.098 [2024-11-19 12:00:26.256163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:13.098 [2024-11-19 12:00:26.256173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:29:13.098 [2024-11-19 12:00:26.256180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.098 [2024-11-19 12:00:26.256245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.098 [2024-11-19 12:00:26.256254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:13.098 [2024-11-19 12:00:26.256262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:13.098 [2024-11-19 12:00:26.256269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.098 [2024-11-19 12:00:26.256305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.098 [2024-11-19 12:00:26.256316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:13.098 [2024-11-19 12:00:26.256324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:13.098 [2024-11-19 12:00:26.256331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.098 [2024-11-19 12:00:26.256359] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:13.098 [2024-11-19 12:00:26.256367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.098 [2024-11-19 12:00:26.256376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:13.098 [2024-11-19 12:00:26.256383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:13.098 [2024-11-19 12:00:26.256390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.098 [2024-11-19 12:00:26.260584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.098 [2024-11-19 12:00:26.260618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:13.098 [2024-11-19 
12:00:26.260634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.164 ms 00:29:13.098 [2024-11-19 12:00:26.260642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.098 [2024-11-19 12:00:26.260710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.098 [2024-11-19 12:00:26.260720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:13.098 [2024-11-19 12:00:26.260727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:13.098 [2024-11-19 12:00:26.260734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.098 [2024-11-19 12:00:26.261586] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 63.417 ms, result 0 00:29:14.032  [2024-11-19T12:00:28.817Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-19T12:00:29.748Z] Copying: 58/1024 [MB] (34 MBps) [2024-11-19T12:00:30.681Z] Copying: 101/1024 [MB] (43 MBps) [2024-11-19T12:00:31.615Z] Copying: 144/1024 [MB] (43 MBps) [2024-11-19T12:00:32.552Z] Copying: 187/1024 [MB] (42 MBps) [2024-11-19T12:00:33.487Z] Copying: 221/1024 [MB] (34 MBps) [2024-11-19T12:00:34.863Z] Copying: 252/1024 [MB] (30 MBps) [2024-11-19T12:00:35.437Z] Copying: 275/1024 [MB] (23 MBps) [2024-11-19T12:00:36.813Z] Copying: 299/1024 [MB] (23 MBps) [2024-11-19T12:00:37.756Z] Copying: 329/1024 [MB] (30 MBps) [2024-11-19T12:00:38.689Z] Copying: 383/1024 [MB] (53 MBps) [2024-11-19T12:00:39.623Z] Copying: 428/1024 [MB] (45 MBps) [2024-11-19T12:00:40.557Z] Copying: 476/1024 [MB] (48 MBps) [2024-11-19T12:00:41.492Z] Copying: 526/1024 [MB] (50 MBps) [2024-11-19T12:00:42.866Z] Copying: 571/1024 [MB] (44 MBps) [2024-11-19T12:00:43.433Z] Copying: 621/1024 [MB] (50 MBps) [2024-11-19T12:00:44.807Z] Copying: 672/1024 [MB] (50 MBps) [2024-11-19T12:00:45.740Z] Copying: 711/1024 [MB] (39 MBps) [2024-11-19T12:00:46.673Z] Copying: 759/1024 [MB] (48 MBps) [2024-11-19T12:00:47.607Z] Copying: 811/1024 [MB] (52 MBps) [2024-11-19T12:00:48.540Z] Copying: 863/1024 [MB] (52 MBps) [2024-11-19T12:00:49.475Z] Copying: 914/1024 [MB] (50 MBps) [2024-11-19T12:00:50.854Z] Copying: 958/1024 [MB] (43 MBps) [2024-11-19T12:00:50.854Z] Copying: 1005/1024 [MB] (46 MBps) [2024-11-19T12:00:52.229Z] Copying: 1024/1024 [MB] (average 41 MBps)[2024-11-19 12:00:51.924833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.817 [2024-11-19 12:00:51.925060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:38.817 [2024-11-19 12:00:51.925123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:38.817 [2024-11-19 12:00:51.925156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.817 [2024-11-19 12:00:51.925195] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:38.817 [2024-11-19 12:00:51.925675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.817 [2024-11-19 12:00:51.925777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:38.817 [2024-11-19 12:00:51.925830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:29:38.817 [2024-11-19 12:00:51.925853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.817 [2024-11-19 12:00:51.926085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.817 [2024-11-19 12:00:51.926117] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:38.817 [2024-11-19 12:00:51.926138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:29:38.817 [2024-11-19 12:00:51.926202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.817 [2024-11-19 12:00:51.926251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.817 [2024-11-19 12:00:51.926278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:38.817 [2024-11-19 12:00:51.926337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:38.817 [2024-11-19 12:00:51.926359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.817 [2024-11-19 12:00:51.926458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.817 [2024-11-19 12:00:51.926489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:38.817 [2024-11-19 12:00:51.926546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:38.817 [2024-11-19 12:00:51.926568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.817 [2024-11-19 12:00:51.926593] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:38.817 [2024-11-19 12:00:51.926617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.926758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.926789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.926817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.926844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.926874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.926901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.927984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.928015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.928043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.928070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.928098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:38.817 [2024-11-19 12:00:51.928158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928273] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928607] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 
12:00:51.928797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:38.818 [2024-11-19 12:00:51.928886] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:38.818 [2024-11-19 12:00:51.928894] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90c3f044-fb37-4a86-8d3e-acf3f349e6a6 00:29:38.818 [2024-11-19 12:00:51.928904] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:38.818 [2024-11-19 12:00:51.928912] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:38.818 [2024-11-19 12:00:51.928919] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:38.818 [2024-11-19 12:00:51.928926] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:38.818 [2024-11-19 12:00:51.928933] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:38.818 [2024-11-19 12:00:51.928943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:38.818 [2024-11-19 12:00:51.928950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:38.818 [2024-11-19 12:00:51.928956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:38.818 [2024-11-19 12:00:51.928962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:38.818 [2024-11-19 12:00:51.928971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.818 [2024-11-19 12:00:51.928981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:38.818 [2024-11-19 12:00:51.928988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.378 ms 00:29:38.818 [2024-11-19 12:00:51.929001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.818 [2024-11-19 12:00:51.930347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.818 [2024-11-19 12:00:51.930368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:38.818 [2024-11-19 12:00:51.930376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.326 ms 00:29:38.818 [2024-11-19 12:00:51.930383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:29:38.818 [2024-11-19 12:00:51.930468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:38.818 [2024-11-19 12:00:51.930476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:38.818 [2024-11-19 12:00:51.930484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:38.818 [2024-11-19 12:00:51.930491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.818 [2024-11-19 12:00:51.934817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.818 [2024-11-19 12:00:51.934843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:38.818 [2024-11-19 12:00:51.934854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.818 [2024-11-19 12:00:51.934861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.818 [2024-11-19 12:00:51.934917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.818 [2024-11-19 12:00:51.934925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:38.818 [2024-11-19 12:00:51.934932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.818 [2024-11-19 12:00:51.934940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.818 [2024-11-19 12:00:51.934990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.819 [2024-11-19 12:00:51.934999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:38.819 [2024-11-19 12:00:51.935006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.819 [2024-11-19 12:00:51.935013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.819 [2024-11-19 12:00:51.935027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.819 [2024-11-19 12:00:51.935035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:38.819 [2024-11-19 12:00:51.935043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.819 [2024-11-19 12:00:51.935049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.819 [2024-11-19 12:00:51.944167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.819 [2024-11-19 12:00:51.944206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:38.819 [2024-11-19 12:00:51.944217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.819 [2024-11-19 12:00:51.944225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.819 [2024-11-19 12:00:51.952907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.819 [2024-11-19 12:00:51.952946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:38.819 [2024-11-19 12:00:51.952956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.819 [2024-11-19 12:00:51.952964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.819 [2024-11-19 12:00:51.952991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.819 [2024-11-19 12:00:51.952999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:38.819 [2024-11-19 12:00:51.953007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.819 
[2024-11-19 12:00:51.953014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.819 [2024-11-19 12:00:51.953061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.819 [2024-11-19 12:00:51.953069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:38.819 [2024-11-19 12:00:51.953077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.819 [2024-11-19 12:00:51.953084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.819 [2024-11-19 12:00:51.953129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.819 [2024-11-19 12:00:51.953145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:38.819 [2024-11-19 12:00:51.953153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.819 [2024-11-19 12:00:51.953160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.819 [2024-11-19 12:00:51.953186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.819 [2024-11-19 12:00:51.953199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:38.819 [2024-11-19 12:00:51.953207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.819 [2024-11-19 12:00:51.953214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.819 [2024-11-19 12:00:51.953247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.819 [2024-11-19 12:00:51.953259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:38.819 [2024-11-19 12:00:51.953269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.819 [2024-11-19 12:00:51.953276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.819 [2024-11-19 12:00:51.953313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:38.819 [2024-11-19 12:00:51.953327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:38.819 [2024-11-19 12:00:51.953335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:38.819 [2024-11-19 12:00:51.953341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:38.819 [2024-11-19 12:00:51.953497] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 28.590 ms, result 0 00:29:38.819 00:29:38.819 00:29:38.819 12:00:52 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:41.345 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:41.345 12:00:54 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:41.345 [2024-11-19 12:00:54.252217] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
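For readability, the two wrapped test steps logged just above, unwrapped: restore.sh@76 verifies the test payload against its recorded checksum, and restore.sh@79 replays the same file into the ftl0 bdev via spdk_dd. A minimal sketch using exactly the paths and flags from the log (per dd conventions, --seek=131072 offsets the write by 131072 blocks on the output device):

  # step 1: confirm the on-disk payload still matches the recorded md5
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5

  # step 2: write the payload back into the FTL bdev at a 131072-block offset
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --seek=131072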
00:29:41.345 [2024-11-19 12:00:54.252370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93545 ] 00:29:41.345 [2024-11-19 12:00:54.392649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.345 [2024-11-19 12:00:54.425678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.345 [2024-11-19 12:00:54.512621] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:41.345 [2024-11-19 12:00:54.512692] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:41.345 [2024-11-19 12:00:54.665421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.345 [2024-11-19 12:00:54.665478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:41.345 [2024-11-19 12:00:54.665493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:41.345 [2024-11-19 12:00:54.665501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.345 [2024-11-19 12:00:54.665545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.345 [2024-11-19 12:00:54.665555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:41.345 [2024-11-19 12:00:54.665567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:41.345 [2024-11-19 12:00:54.665579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.345 [2024-11-19 12:00:54.665601] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:41.345 [2024-11-19 12:00:54.666258] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:41.345 [2024-11-19 12:00:54.666297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.345 [2024-11-19 12:00:54.666307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:41.345 [2024-11-19 12:00:54.666319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:29:41.345 [2024-11-19 12:00:54.666326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.345 [2024-11-19 12:00:54.666901] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:41.345 [2024-11-19 12:00:54.666951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.345 [2024-11-19 12:00:54.666961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:41.345 [2024-11-19 12:00:54.666970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:41.345 [2024-11-19 12:00:54.666978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.345 [2024-11-19 12:00:54.667063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.345 [2024-11-19 12:00:54.667078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:41.345 [2024-11-19 12:00:54.667088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:41.345 [2024-11-19 12:00:54.667095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.345 [2024-11-19 12:00:54.667344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:41.345 [2024-11-19 12:00:54.667355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:41.345 [2024-11-19 12:00:54.667364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:29:41.345 [2024-11-19 12:00:54.667375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.345 [2024-11-19 12:00:54.667475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.345 [2024-11-19 12:00:54.667488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:41.345 [2024-11-19 12:00:54.667496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:29:41.345 [2024-11-19 12:00:54.667503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.345 [2024-11-19 12:00:54.667525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.345 [2024-11-19 12:00:54.667534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:41.345 [2024-11-19 12:00:54.667542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:41.345 [2024-11-19 12:00:54.667550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.345 [2024-11-19 12:00:54.667569] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:41.345 [2024-11-19 12:00:54.668981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.345 [2024-11-19 12:00:54.669012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:41.345 [2024-11-19 12:00:54.669027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:29:41.345 [2024-11-19 12:00:54.669034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.345 [2024-11-19 12:00:54.669061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.345 [2024-11-19 12:00:54.669069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:41.345 [2024-11-19 12:00:54.669076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:41.345 [2024-11-19 12:00:54.669086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.345 [2024-11-19 12:00:54.669115] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:41.345 [2024-11-19 12:00:54.669132] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:41.345 [2024-11-19 12:00:54.669170] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:41.345 [2024-11-19 12:00:54.669185] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:41.345 [2024-11-19 12:00:54.669285] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:41.345 [2024-11-19 12:00:54.669300] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:41.345 [2024-11-19 12:00:54.669316] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:41.345 [2024-11-19 12:00:54.669329] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:41.345 [2024-11-19 12:00:54.669338] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:41.345 [2024-11-19 12:00:54.669347] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:41.345 [2024-11-19 12:00:54.669358] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:41.345 [2024-11-19 12:00:54.669364] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:41.345 [2024-11-19 12:00:54.669371] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:41.345 [2024-11-19 12:00:54.669378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.345 [2024-11-19 12:00:54.669385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:41.345 [2024-11-19 12:00:54.669392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:29:41.345 [2024-11-19 12:00:54.669399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.345 [2024-11-19 12:00:54.669496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.345 [2024-11-19 12:00:54.669505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:41.345 [2024-11-19 12:00:54.669512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:41.345 [2024-11-19 12:00:54.669521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.345 [2024-11-19 12:00:54.669631] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:41.345 [2024-11-19 12:00:54.669649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:41.345 [2024-11-19 12:00:54.669659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:41.345 [2024-11-19 12:00:54.669667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:41.345 [2024-11-19 12:00:54.669678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:41.345 [2024-11-19 12:00:54.669685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:41.345 [2024-11-19 12:00:54.669693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:41.345 [2024-11-19 12:00:54.669701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:41.345 [2024-11-19 12:00:54.669709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:41.345 [2024-11-19 12:00:54.669716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:41.345 [2024-11-19 12:00:54.669726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:41.345 [2024-11-19 12:00:54.669735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:41.345 [2024-11-19 12:00:54.669742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:41.345 [2024-11-19 12:00:54.669750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:41.345 [2024-11-19 12:00:54.669758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:41.346 [2024-11-19 12:00:54.669765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:41.346 [2024-11-19 12:00:54.669773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:41.346 [2024-11-19 12:00:54.669780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:41.346 [2024-11-19 12:00:54.669787] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:41.346 [2024-11-19 12:00:54.669795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:41.346 [2024-11-19 12:00:54.669804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:41.346 [2024-11-19 12:00:54.669812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:41.346 [2024-11-19 12:00:54.669820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:41.346 [2024-11-19 12:00:54.669827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:41.346 [2024-11-19 12:00:54.669835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:41.346 [2024-11-19 12:00:54.669842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:41.346 [2024-11-19 12:00:54.669849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:41.346 [2024-11-19 12:00:54.669856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:41.346 [2024-11-19 12:00:54.669864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:41.346 [2024-11-19 12:00:54.669871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:41.346 [2024-11-19 12:00:54.669879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:41.346 [2024-11-19 12:00:54.669886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:41.346 [2024-11-19 12:00:54.669894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:41.346 [2024-11-19 12:00:54.669901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:41.346 [2024-11-19 12:00:54.669908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:41.346 [2024-11-19 12:00:54.669916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:41.346 [2024-11-19 12:00:54.669928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:41.346 [2024-11-19 12:00:54.669936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:41.346 [2024-11-19 12:00:54.669943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:41.346 [2024-11-19 12:00:54.669950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:41.346 [2024-11-19 12:00:54.669958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:41.346 [2024-11-19 12:00:54.669966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:41.346 [2024-11-19 12:00:54.669974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:41.346 [2024-11-19 12:00:54.669982] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:41.346 [2024-11-19 12:00:54.669991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:41.346 [2024-11-19 12:00:54.669999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:41.346 [2024-11-19 12:00:54.670007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:41.346 [2024-11-19 12:00:54.670015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:41.346 [2024-11-19 12:00:54.670022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:41.346 [2024-11-19 12:00:54.670029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:41.346 
[2024-11-19 12:00:54.670035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:41.346 [2024-11-19 12:00:54.670041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:41.346 [2024-11-19 12:00:54.670049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:41.346 [2024-11-19 12:00:54.670058] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:41.346 [2024-11-19 12:00:54.670068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:41.346 [2024-11-19 12:00:54.670076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:41.346 [2024-11-19 12:00:54.670083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:41.346 [2024-11-19 12:00:54.670091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:41.346 [2024-11-19 12:00:54.670098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:41.346 [2024-11-19 12:00:54.670105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:41.346 [2024-11-19 12:00:54.670111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:41.346 [2024-11-19 12:00:54.670118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:41.346 [2024-11-19 12:00:54.670125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:41.346 [2024-11-19 12:00:54.670131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:41.346 [2024-11-19 12:00:54.670138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:41.346 [2024-11-19 12:00:54.670146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:41.346 [2024-11-19 12:00:54.670157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:41.346 [2024-11-19 12:00:54.670164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:41.346 [2024-11-19 12:00:54.670173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:41.346 [2024-11-19 12:00:54.670180] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:41.346 [2024-11-19 12:00:54.670188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:41.346 [2024-11-19 12:00:54.670195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:41.346 [2024-11-19 12:00:54.670202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:41.346 [2024-11-19 12:00:54.670208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:41.346 [2024-11-19 12:00:54.670216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:41.346 [2024-11-19 12:00:54.670223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.346 [2024-11-19 12:00:54.670230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:41.346 [2024-11-19 12:00:54.670238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:29:41.346 [2024-11-19 12:00:54.670244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.346 [2024-11-19 12:00:54.684562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.346 [2024-11-19 12:00:54.684611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:41.346 [2024-11-19 12:00:54.684630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.279 ms 00:29:41.346 [2024-11-19 12:00:54.684642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.346 [2024-11-19 12:00:54.684742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.346 [2024-11-19 12:00:54.684752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:41.346 [2024-11-19 12:00:54.684761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:41.346 [2024-11-19 12:00:54.684773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.346 [2024-11-19 12:00:54.693874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.346 [2024-11-19 12:00:54.693927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:41.346 [2024-11-19 12:00:54.693943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.048 ms 00:29:41.346 [2024-11-19 12:00:54.693953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.346 [2024-11-19 12:00:54.693989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.346 [2024-11-19 12:00:54.694001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:41.346 [2024-11-19 12:00:54.694012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:41.346 [2024-11-19 12:00:54.694021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.346 [2024-11-19 12:00:54.694107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.346 [2024-11-19 12:00:54.694127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:41.346 [2024-11-19 12:00:54.694138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:41.346 [2024-11-19 12:00:54.694151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.346 [2024-11-19 12:00:54.694295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.346 [2024-11-19 12:00:54.694306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:41.346 [2024-11-19 12:00:54.694317] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:29:41.346 [2024-11-19 12:00:54.694327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.346 [2024-11-19 12:00:54.699548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.346 [2024-11-19 12:00:54.699585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:41.346 [2024-11-19 12:00:54.699603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.193 ms 00:29:41.346 [2024-11-19 12:00:54.699613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.346 [2024-11-19 12:00:54.699729] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:41.346 [2024-11-19 12:00:54.699742] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:41.346 [2024-11-19 12:00:54.699754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.346 [2024-11-19 12:00:54.699761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:41.346 [2024-11-19 12:00:54.699769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:41.346 [2024-11-19 12:00:54.699776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.346 [2024-11-19 12:00:54.712030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.347 [2024-11-19 12:00:54.712075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:41.347 [2024-11-19 12:00:54.712085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.240 ms 00:29:41.347 [2024-11-19 12:00:54.712092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.347 [2024-11-19 12:00:54.712206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.347 [2024-11-19 12:00:54.712214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:41.347 [2024-11-19 12:00:54.712225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:29:41.347 [2024-11-19 12:00:54.712232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.347 [2024-11-19 12:00:54.712271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.347 [2024-11-19 12:00:54.712283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:41.347 [2024-11-19 12:00:54.712290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:41.347 [2024-11-19 12:00:54.712299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.347 [2024-11-19 12:00:54.712607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.347 [2024-11-19 12:00:54.712628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:41.347 [2024-11-19 12:00:54.712642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:29:41.347 [2024-11-19 12:00:54.712649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.347 [2024-11-19 12:00:54.712665] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:41.347 [2024-11-19 12:00:54.712675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.347 [2024-11-19 12:00:54.712682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:29:41.347 [2024-11-19 12:00:54.712692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:41.347 [2024-11-19 12:00:54.712701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.347 [2024-11-19 12:00:54.720585] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:41.347 [2024-11-19 12:00:54.720704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.347 [2024-11-19 12:00:54.720713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:41.347 [2024-11-19 12:00:54.720721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.987 ms 00:29:41.347 [2024-11-19 12:00:54.720728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.347 [2024-11-19 12:00:54.722949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.347 [2024-11-19 12:00:54.722974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:41.347 [2024-11-19 12:00:54.722983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.204 ms 00:29:41.347 [2024-11-19 12:00:54.722990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.347 [2024-11-19 12:00:54.723051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.347 [2024-11-19 12:00:54.723060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:41.347 [2024-11-19 12:00:54.723075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:41.347 [2024-11-19 12:00:54.723082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.347 [2024-11-19 12:00:54.723118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.347 [2024-11-19 12:00:54.723129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:41.347 [2024-11-19 12:00:54.723137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:41.347 [2024-11-19 12:00:54.723148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.347 [2024-11-19 12:00:54.723173] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:41.347 [2024-11-19 12:00:54.723182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.347 [2024-11-19 12:00:54.723190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:41.347 [2024-11-19 12:00:54.723197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:41.347 [2024-11-19 12:00:54.723204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.347 [2024-11-19 12:00:54.727376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.347 [2024-11-19 12:00:54.727421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:41.347 [2024-11-19 12:00:54.727435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.153 ms 00:29:41.347 [2024-11-19 12:00:54.727443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.347 [2024-11-19 12:00:54.727514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:41.347 [2024-11-19 12:00:54.727523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:41.347 [2024-11-19 12:00:54.727530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.033 ms 00:29:41.347 [2024-11-19 12:00:54.727537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:41.347 [2024-11-19 12:00:54.728354] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 62.573 ms, result 0 00:29:42.717  [2024-11-19T12:00:57.072Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-19T12:00:58.016Z] Copying: 47/1024 [MB] (25 MBps) [2024-11-19T12:00:58.960Z] Copying: 67/1024 [MB] (19 MBps) [2024-11-19T12:00:59.935Z] Copying: 84/1024 [MB] (16 MBps) [2024-11-19T12:01:00.872Z] Copying: 104/1024 [MB] (19 MBps) [2024-11-19T12:01:01.805Z] Copying: 120/1024 [MB] (16 MBps) [2024-11-19T12:01:02.746Z] Copying: 139/1024 [MB] (19 MBps) [2024-11-19T12:01:04.125Z] Copying: 161/1024 [MB] (22 MBps) [2024-11-19T12:01:05.064Z] Copying: 172/1024 [MB] (10 MBps) [2024-11-19T12:01:06.000Z] Copying: 182/1024 [MB] (10 MBps) [2024-11-19T12:01:06.932Z] Copying: 195/1024 [MB] (12 MBps) [2024-11-19T12:01:07.868Z] Copying: 207/1024 [MB] (11 MBps) [2024-11-19T12:01:08.813Z] Copying: 218/1024 [MB] (10 MBps) [2024-11-19T12:01:09.757Z] Copying: 228/1024 [MB] (10 MBps) [2024-11-19T12:01:11.140Z] Copying: 238/1024 [MB] (10 MBps) [2024-11-19T12:01:12.082Z] Copying: 254604/1048576 [kB] (9996 kBps) [2024-11-19T12:01:13.026Z] Copying: 258/1024 [MB] (10 MBps) [2024-11-19T12:01:13.969Z] Copying: 268/1024 [MB] (10 MBps) [2024-11-19T12:01:14.907Z] Copying: 283/1024 [MB] (14 MBps) [2024-11-19T12:01:15.846Z] Copying: 296/1024 [MB] (12 MBps) [2024-11-19T12:01:16.790Z] Copying: 307/1024 [MB] (10 MBps) [2024-11-19T12:01:18.176Z] Copying: 324532/1048576 [kB] (10064 kBps) [2024-11-19T12:01:18.747Z] Copying: 334212/1048576 [kB] (9680 kBps) [2024-11-19T12:01:20.132Z] Copying: 344248/1048576 [kB] (10036 kBps) [2024-11-19T12:01:21.076Z] Copying: 353348/1048576 [kB] (9100 kBps) [2024-11-19T12:01:22.021Z] Copying: 362720/1048576 [kB] (9372 kBps) [2024-11-19T12:01:22.964Z] Copying: 372464/1048576 [kB] (9744 kBps) [2024-11-19T12:01:23.908Z] Copying: 382660/1048576 [kB] (10196 kBps) [2024-11-19T12:01:24.852Z] Copying: 392900/1048576 [kB] (10240 kBps) [2024-11-19T12:01:25.796Z] Copying: 394/1024 [MB] (10 MBps) [2024-11-19T12:01:27.184Z] Copying: 413764/1048576 [kB] (9656 kBps) [2024-11-19T12:01:27.763Z] Copying: 423740/1048576 [kB] (9976 kBps) [2024-11-19T12:01:29.148Z] Copying: 424/1024 [MB] (10 MBps) [2024-11-19T12:01:30.094Z] Copying: 434/1024 [MB] (10 MBps) [2024-11-19T12:01:31.036Z] Copying: 444/1024 [MB] (10 MBps) [2024-11-19T12:01:31.979Z] Copying: 455/1024 [MB] (10 MBps) [2024-11-19T12:01:32.924Z] Copying: 465/1024 [MB] (10 MBps) [2024-11-19T12:01:33.868Z] Copying: 486904/1048576 [kB] (10044 kBps) [2024-11-19T12:01:34.811Z] Copying: 496908/1048576 [kB] (10004 kBps) [2024-11-19T12:01:35.754Z] Copying: 506608/1048576 [kB] (9700 kBps) [2024-11-19T12:01:37.138Z] Copying: 516124/1048576 [kB] (9516 kBps) [2024-11-19T12:01:38.079Z] Copying: 525560/1048576 [kB] (9436 kBps) [2024-11-19T12:01:39.022Z] Copying: 535000/1048576 [kB] (9440 kBps) [2024-11-19T12:01:39.964Z] Copying: 532/1024 [MB] (10 MBps) [2024-11-19T12:01:40.905Z] Copying: 542/1024 [MB] (10 MBps) [2024-11-19T12:01:41.849Z] Copying: 565704/1048576 [kB] (9744 kBps) [2024-11-19T12:01:42.793Z] Copying: 562/1024 [MB] (10 MBps) [2024-11-19T12:01:44.183Z] Copying: 573/1024 [MB] (10 MBps) [2024-11-19T12:01:44.756Z] Copying: 583/1024 [MB] (10 MBps) [2024-11-19T12:01:46.145Z] Copying: 593/1024 [MB] (10 MBps) [2024-11-19T12:01:47.090Z] Copying: 604/1024 [MB] (10 MBps) 
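The bracketed progress ticks above and below arrive roughly once per second (the UTC timestamps step by about a second), each giving cumulative progress plus an instantaneous rate; slower intervals appear to be reported in kB out of 1048576 rather than MB out of 1024. A rough one-liner to recompute a mean rate from the MB-denominated ticks (build.log is a hypothetical capture of this console output, and this yields an unweighted mean of per-tick rates, not the byte-weighted figure in the run's closing 'average 15 MBps' line):

  # extract every "Copying: N/1024 [MB] (R MBps)" tick and average the R values
  grep -o 'Copying: [0-9]*/1024 \[MB\] ([0-9]* MBps)' build.log |
    awk '{ gsub(/[()]/, "", $4); sum += $4; n++ }
         END { if (n) printf "ticks=%d mean=%.1f MBps\n", n, sum / n }'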
[2024-11-19T12:01:48.044Z] Copying: 614/1024 [MB] (10 MBps) [2024-11-19T12:01:48.987Z] Copying: 625/1024 [MB] (11 MBps) [2024-11-19T12:01:49.930Z] Copying: 667/1024 [MB] (42 MBps) [2024-11-19T12:01:50.918Z] Copying: 711/1024 [MB] (43 MBps) [2024-11-19T12:01:51.858Z] Copying: 756/1024 [MB] (44 MBps) [2024-11-19T12:01:52.800Z] Copying: 800/1024 [MB] (43 MBps) [2024-11-19T12:01:53.743Z] Copying: 842/1024 [MB] (42 MBps) [2024-11-19T12:01:55.127Z] Copying: 871/1024 [MB] (28 MBps) [2024-11-19T12:01:56.069Z] Copying: 901/1024 [MB] (29 MBps) [2024-11-19T12:01:57.010Z] Copying: 932/1024 [MB] (31 MBps) [2024-11-19T12:01:57.949Z] Copying: 963/1024 [MB] (30 MBps) [2024-11-19T12:01:58.889Z] Copying: 996/1024 [MB] (33 MBps) [2024-11-19T12:01:59.456Z] Copying: 1023/1024 [MB] (26 MBps) [2024-11-19T12:01:59.456Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-19 12:01:59.248053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.044 [2024-11-19 12:01:59.248128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:46.044 [2024-11-19 12:01:59.248143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:46.044 [2024-11-19 12:01:59.248152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.044 [2024-11-19 12:01:59.251856] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:46.044 [2024-11-19 12:01:59.253741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.044 [2024-11-19 12:01:59.253777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:46.044 [2024-11-19 12:01:59.253789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.841 ms 00:30:46.044 [2024-11-19 12:01:59.253798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.044 [2024-11-19 12:01:59.264672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.044 [2024-11-19 12:01:59.264706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:46.044 [2024-11-19 12:01:59.264722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.704 ms 00:30:46.044 [2024-11-19 12:01:59.264730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.044 [2024-11-19 12:01:59.264761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.044 [2024-11-19 12:01:59.264771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:46.044 [2024-11-19 12:01:59.264779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:46.044 [2024-11-19 12:01:59.264792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.044 [2024-11-19 12:01:59.264841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.044 [2024-11-19 12:01:59.264850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:46.044 [2024-11-19 12:01:59.264859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:46.044 [2024-11-19 12:01:59.264873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.044 [2024-11-19 12:01:59.264886] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:46.045 [2024-11-19 12:01:59.264898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:30:46.045 [2024-11-19 
12:01:59.264909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.264917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.264925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.264934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.264942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.264950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.264958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.264966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.264973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.264981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.264989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.264996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 
[2024-11-19 12:01:59.265101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 
state: free 00:30:46.045 [2024-11-19 12:01:59.265293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 
0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:46.045 [2024-11-19 12:01:59.265583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:46.046 [2024-11-19 12:01:59.265694] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:30:46.046 [2024-11-19 12:01:59.265703] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90c3f044-fb37-4a86-8d3e-acf3f349e6a6 00:30:46.046 [2024-11-19 12:01:59.265714] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:30:46.046 [2024-11-19 12:01:59.265723] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129568 00:30:46.046 [2024-11-19 12:01:59.265730] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:30:46.046 [2024-11-19 12:01:59.265741] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:30:46.046 [2024-11-19 12:01:59.265749] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:46.046 [2024-11-19 12:01:59.265758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:46.046 [2024-11-19 12:01:59.265767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:46.046 [2024-11-19 12:01:59.265774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:46.046 [2024-11-19 12:01:59.265782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:46.046 [2024-11-19 12:01:59.265790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.046 [2024-11-19 12:01:59.265798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:46.046 [2024-11-19 12:01:59.265807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:30:46.046 [2024-11-19 12:01:59.265815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 12:01:59.267685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.046 [2024-11-19 12:01:59.267711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:46.046 [2024-11-19 12:01:59.267721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:30:46.046 [2024-11-19 12:01:59.267729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 12:01:59.267828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.046 [2024-11-19 12:01:59.267838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:46.046 [2024-11-19 12:01:59.267847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:30:46.046 [2024-11-19 12:01:59.267855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 12:01:59.273328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:46.046 [2024-11-19 12:01:59.273355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:46.046 [2024-11-19 12:01:59.273368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:46.046 [2024-11-19 12:01:59.273376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 12:01:59.273447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:46.046 [2024-11-19 12:01:59.273457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:46.046 [2024-11-19 12:01:59.273466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:46.046 [2024-11-19 12:01:59.273474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 12:01:59.273523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:30:46.046 [2024-11-19 12:01:59.273534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:46.046 [2024-11-19 12:01:59.273542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:46.046 [2024-11-19 12:01:59.273549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 12:01:59.273573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:46.046 [2024-11-19 12:01:59.273581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:46.046 [2024-11-19 12:01:59.273590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:46.046 [2024-11-19 12:01:59.273597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 12:01:59.284972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:46.046 [2024-11-19 12:01:59.285011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:46.046 [2024-11-19 12:01:59.285023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:46.046 [2024-11-19 12:01:59.285037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 12:01:59.294742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:46.046 [2024-11-19 12:01:59.294791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:46.046 [2024-11-19 12:01:59.294803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:46.046 [2024-11-19 12:01:59.294810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 12:01:59.294859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:46.046 [2024-11-19 12:01:59.294869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:46.046 [2024-11-19 12:01:59.294878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:46.046 [2024-11-19 12:01:59.294885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 12:01:59.294915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:46.046 [2024-11-19 12:01:59.294925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:46.046 [2024-11-19 12:01:59.294933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:46.046 [2024-11-19 12:01:59.294941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 12:01:59.294990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:46.046 [2024-11-19 12:01:59.294999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:46.046 [2024-11-19 12:01:59.295008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:46.046 [2024-11-19 12:01:59.295015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 12:01:59.295043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:46.046 [2024-11-19 12:01:59.295062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:46.046 [2024-11-19 12:01:59.295070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:46.046 [2024-11-19 12:01:59.295078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.046 [2024-11-19 
12:01:59.295120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:46.046 [2024-11-19 12:01:59.295129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:30:46.046 [2024-11-19 12:01:59.295138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:46.046 [2024-11-19 12:01:59.295145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:46.046 [2024-11-19 12:01:59.295198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:46.046 [2024-11-19 12:01:59.295340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:30:46.046 [2024-11-19 12:01:59.295350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:30:46.046 [2024-11-19 12:01:59.295358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:46.046 [2024-11-19 12:01:59.295546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 49.100 ms, result 0
00:30:46.618
00:30:46.618
00:30:46.618 12:02:00 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:30:46.879 [2024-11-19 12:02:00.072695] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:30:46.879 [2024-11-19 12:02:00.072812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94204 ]
00:30:46.879 [2024-11-19 12:02:00.208051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:46.879 [2024-11-19 12:02:00.250735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:30:47.141 [2024-11-19 12:02:00.354070] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:47.142 [2024-11-19 12:02:00.354139] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:47.142 [2024-11-19 12:02:00.508526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:47.142 [2024-11-19 12:02:00.508587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:30:47.142 [2024-11-19 12:02:00.508604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:30:47.142 [2024-11-19 12:02:00.508613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:47.142 [2024-11-19 12:02:00.508662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:47.142 [2024-11-19 12:02:00.508673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:30:47.142 [2024-11-19 12:02:00.508682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:30:47.142 [2024-11-19 12:02:00.508696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:47.142 [2024-11-19 12:02:00.508715] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:30:47.142 [2024-11-19 12:02:00.509135] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:30:47.142 [2024-11-19 12:02:00.509175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:47.142
[2024-11-19 12:02:00.509185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:47.142 [2024-11-19 12:02:00.509196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:30:47.142 [2024-11-19 12:02:00.509204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.142 [2024-11-19 12:02:00.509542] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:47.142 [2024-11-19 12:02:00.509570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.142 [2024-11-19 12:02:00.509579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:47.142 [2024-11-19 12:02:00.509589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:47.142 [2024-11-19 12:02:00.509597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.142 [2024-11-19 12:02:00.509647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.142 [2024-11-19 12:02:00.509659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:47.142 [2024-11-19 12:02:00.509669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:47.142 [2024-11-19 12:02:00.509677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.142 [2024-11-19 12:02:00.509909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.142 [2024-11-19 12:02:00.509928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:47.142 [2024-11-19 12:02:00.509937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:30:47.142 [2024-11-19 12:02:00.509947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.142 [2024-11-19 12:02:00.510023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.142 [2024-11-19 12:02:00.510036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:47.142 [2024-11-19 12:02:00.510044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:30:47.142 [2024-11-19 12:02:00.510053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.142 [2024-11-19 12:02:00.510076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.142 [2024-11-19 12:02:00.510085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:47.142 [2024-11-19 12:02:00.510093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:47.142 [2024-11-19 12:02:00.510100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.142 [2024-11-19 12:02:00.510119] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:47.142 [2024-11-19 12:02:00.512019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.142 [2024-11-19 12:02:00.512049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:47.142 [2024-11-19 12:02:00.512063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.904 ms 00:30:47.142 [2024-11-19 12:02:00.512071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.142 [2024-11-19 12:02:00.512104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.142 [2024-11-19 12:02:00.512116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:47.142 
[2024-11-19 12:02:00.512126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:47.142 [2024-11-19 12:02:00.512134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.142 [2024-11-19 12:02:00.512163] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:47.142 [2024-11-19 12:02:00.512182] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:47.142 [2024-11-19 12:02:00.512223] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:47.142 [2024-11-19 12:02:00.512244] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:47.142 [2024-11-19 12:02:00.512348] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:47.142 [2024-11-19 12:02:00.512359] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:47.142 [2024-11-19 12:02:00.512370] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:47.142 [2024-11-19 12:02:00.512380] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:47.142 [2024-11-19 12:02:00.512389] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:47.142 [2024-11-19 12:02:00.512397] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:47.142 [2024-11-19 12:02:00.512425] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:47.142 [2024-11-19 12:02:00.512433] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:47.142 [2024-11-19 12:02:00.512442] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:47.142 [2024-11-19 12:02:00.512450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.142 [2024-11-19 12:02:00.512457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:47.142 [2024-11-19 12:02:00.512465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:30:47.142 [2024-11-19 12:02:00.512473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.142 [2024-11-19 12:02:00.512555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.142 [2024-11-19 12:02:00.512564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:47.142 [2024-11-19 12:02:00.512571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:47.142 [2024-11-19 12:02:00.512581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.142 [2024-11-19 12:02:00.512697] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:47.142 [2024-11-19 12:02:00.512720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:47.142 [2024-11-19 12:02:00.512730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:47.142 [2024-11-19 12:02:00.512739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:47.142 [2024-11-19 12:02:00.512747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:47.142 [2024-11-19 12:02:00.512756] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:47.142 [2024-11-19 12:02:00.512764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:47.142 [2024-11-19 12:02:00.512774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:47.142 [2024-11-19 12:02:00.512783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:47.142 [2024-11-19 12:02:00.512790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:47.142 [2024-11-19 12:02:00.512800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:47.142 [2024-11-19 12:02:00.512808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:47.142 [2024-11-19 12:02:00.512817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:47.142 [2024-11-19 12:02:00.512825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:47.142 [2024-11-19 12:02:00.512833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:47.142 [2024-11-19 12:02:00.512841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:47.142 [2024-11-19 12:02:00.512849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:47.142 [2024-11-19 12:02:00.512857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:47.142 [2024-11-19 12:02:00.512865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:47.142 [2024-11-19 12:02:00.512872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:47.142 [2024-11-19 12:02:00.512880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:47.142 [2024-11-19 12:02:00.512887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:47.142 [2024-11-19 12:02:00.512896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:47.142 [2024-11-19 12:02:00.512906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:47.142 [2024-11-19 12:02:00.512913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:47.142 [2024-11-19 12:02:00.512921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:47.142 [2024-11-19 12:02:00.512928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:47.142 [2024-11-19 12:02:00.512936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:47.142 [2024-11-19 12:02:00.512943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:47.142 [2024-11-19 12:02:00.512951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:47.142 [2024-11-19 12:02:00.512959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:47.142 [2024-11-19 12:02:00.512967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:47.142 [2024-11-19 12:02:00.512975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:47.142 [2024-11-19 12:02:00.512983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:47.142 [2024-11-19 12:02:00.512990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:47.142 [2024-11-19 12:02:00.512998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:47.143 [2024-11-19 12:02:00.513006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:47.143 [2024-11-19 
12:02:00.513013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:47.143 [2024-11-19 12:02:00.513020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:47.143 [2024-11-19 12:02:00.513030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:47.143 [2024-11-19 12:02:00.513038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:47.143 [2024-11-19 12:02:00.513046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:47.143 [2024-11-19 12:02:00.513057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:47.143 [2024-11-19 12:02:00.513064] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:47.143 [2024-11-19 12:02:00.513072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:47.143 [2024-11-19 12:02:00.513079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:47.143 [2024-11-19 12:02:00.513087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:47.143 [2024-11-19 12:02:00.513094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:47.143 [2024-11-19 12:02:00.513102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:47.143 [2024-11-19 12:02:00.513109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:47.143 [2024-11-19 12:02:00.513116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:47.143 [2024-11-19 12:02:00.513122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:47.143 [2024-11-19 12:02:00.513129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:47.143 [2024-11-19 12:02:00.513137] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:47.143 [2024-11-19 12:02:00.513148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:47.143 [2024-11-19 12:02:00.513158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:47.143 [2024-11-19 12:02:00.513165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:47.143 [2024-11-19 12:02:00.513173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:47.143 [2024-11-19 12:02:00.513181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:47.143 [2024-11-19 12:02:00.513188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:47.143 [2024-11-19 12:02:00.513196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:47.143 [2024-11-19 12:02:00.513204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:47.143 [2024-11-19 12:02:00.513211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:47.143 [2024-11-19 12:02:00.513220] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:47.143 [2024-11-19 12:02:00.513227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:47.143 [2024-11-19 12:02:00.513234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:47.143 [2024-11-19 12:02:00.513247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:47.143 [2024-11-19 12:02:00.513256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:47.143 [2024-11-19 12:02:00.513263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:47.143 [2024-11-19 12:02:00.513270] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:47.143 [2024-11-19 12:02:00.513279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:47.143 [2024-11-19 12:02:00.513289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:47.143 [2024-11-19 12:02:00.513296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:47.143 [2024-11-19 12:02:00.513303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:47.143 [2024-11-19 12:02:00.513311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:47.143 [2024-11-19 12:02:00.513318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.143 [2024-11-19 12:02:00.513326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:47.143 [2024-11-19 12:02:00.513334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:30:47.143 [2024-11-19 12:02:00.513341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.143 [2024-11-19 12:02:00.532389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.143 [2024-11-19 12:02:00.532439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:47.143 [2024-11-19 12:02:00.532458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.006 ms 00:30:47.143 [2024-11-19 12:02:00.532466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.143 [2024-11-19 12:02:00.532560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.143 [2024-11-19 12:02:00.532589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:47.143 [2024-11-19 12:02:00.532598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:30:47.143 [2024-11-19 12:02:00.532606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.143 [2024-11-19 12:02:00.544157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.143 [2024-11-19 12:02:00.544198] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:47.143 [2024-11-19 12:02:00.544214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.492 ms 00:30:47.143 [2024-11-19 12:02:00.544224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.143 [2024-11-19 12:02:00.544267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.143 [2024-11-19 12:02:00.544279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:47.143 [2024-11-19 12:02:00.544289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:47.143 [2024-11-19 12:02:00.544299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.143 [2024-11-19 12:02:00.544399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.143 [2024-11-19 12:02:00.544430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:47.143 [2024-11-19 12:02:00.544442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:30:47.143 [2024-11-19 12:02:00.544454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.143 [2024-11-19 12:02:00.544608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.143 [2024-11-19 12:02:00.544627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:47.143 [2024-11-19 12:02:00.544642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:30:47.143 [2024-11-19 12:02:00.544651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.405 [2024-11-19 12:02:00.551195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.405 [2024-11-19 12:02:00.551236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:47.405 [2024-11-19 12:02:00.551250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.520 ms 00:30:47.405 [2024-11-19 12:02:00.551259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.405 [2024-11-19 12:02:00.551370] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:47.405 [2024-11-19 12:02:00.551383] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:47.405 [2024-11-19 12:02:00.551393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.405 [2024-11-19 12:02:00.551402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:47.405 [2024-11-19 12:02:00.551437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:47.405 [2024-11-19 12:02:00.551446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.405 [2024-11-19 12:02:00.563738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.405 [2024-11-19 12:02:00.563773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:47.405 [2024-11-19 12:02:00.563784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.277 ms 00:30:47.405 [2024-11-19 12:02:00.563791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.405 [2024-11-19 12:02:00.563912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.405 [2024-11-19 12:02:00.563922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info 
metadata 00:30:47.405 [2024-11-19 12:02:00.563930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:30:47.405 [2024-11-19 12:02:00.563941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.405 [2024-11-19 12:02:00.563990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.405 [2024-11-19 12:02:00.564004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:47.405 [2024-11-19 12:02:00.564013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:47.405 [2024-11-19 12:02:00.564023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.405 [2024-11-19 12:02:00.564325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.405 [2024-11-19 12:02:00.564343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:47.405 [2024-11-19 12:02:00.564356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:30:47.405 [2024-11-19 12:02:00.564364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.405 [2024-11-19 12:02:00.564380] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:47.405 [2024-11-19 12:02:00.564395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.406 [2024-11-19 12:02:00.564404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:47.406 [2024-11-19 12:02:00.564439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:47.406 [2024-11-19 12:02:00.564449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.406 [2024-11-19 12:02:00.573425] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:47.406 [2024-11-19 12:02:00.573562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.406 [2024-11-19 12:02:00.573573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:47.406 [2024-11-19 12:02:00.573583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.095 ms 00:30:47.406 [2024-11-19 12:02:00.573591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.406 [2024-11-19 12:02:00.576063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.406 [2024-11-19 12:02:00.576093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:47.406 [2024-11-19 12:02:00.576103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.450 ms 00:30:47.406 [2024-11-19 12:02:00.576110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.406 [2024-11-19 12:02:00.576164] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:30:47.406 [2024-11-19 12:02:00.576750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.406 [2024-11-19 12:02:00.576771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:47.406 [2024-11-19 12:02:00.576780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:30:47.406 [2024-11-19 12:02:00.576788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.406 [2024-11-19 12:02:00.576833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.406 [2024-11-19 12:02:00.576843] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:47.406 [2024-11-19 12:02:00.576851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:47.406 [2024-11-19 12:02:00.576859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.406 [2024-11-19 12:02:00.576898] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:47.406 [2024-11-19 12:02:00.576909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.406 [2024-11-19 12:02:00.576917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:47.406 [2024-11-19 12:02:00.576925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:47.406 [2024-11-19 12:02:00.576933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.406 [2024-11-19 12:02:00.582323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.406 [2024-11-19 12:02:00.582366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:47.406 [2024-11-19 12:02:00.582375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.371 ms 00:30:47.406 [2024-11-19 12:02:00.582383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.406 [2024-11-19 12:02:00.582473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.406 [2024-11-19 12:02:00.582484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:47.406 [2024-11-19 12:02:00.582496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:47.406 [2024-11-19 12:02:00.582504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.406 [2024-11-19 12:02:00.583633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 74.617 ms, result 0 00:30:48.791  [2024-11-19T12:02:02.774Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-19T12:02:04.151Z] Copying: 94/1024 [MB] (47 MBps) [2024-11-19T12:02:05.086Z] Copying: 142/1024 [MB] (47 MBps) [2024-11-19T12:02:06.020Z] Copying: 191/1024 [MB] (48 MBps) [2024-11-19T12:02:06.956Z] Copying: 236/1024 [MB] (44 MBps) [2024-11-19T12:02:07.889Z] Copying: 284/1024 [MB] (48 MBps) [2024-11-19T12:02:08.823Z] Copying: 332/1024 [MB] (47 MBps) [2024-11-19T12:02:10.196Z] Copying: 376/1024 [MB] (44 MBps) [2024-11-19T12:02:10.832Z] Copying: 423/1024 [MB] (46 MBps) [2024-11-19T12:02:11.783Z] Copying: 470/1024 [MB] (46 MBps) [2024-11-19T12:02:13.160Z] Copying: 512/1024 [MB] (42 MBps) [2024-11-19T12:02:14.098Z] Copying: 542/1024 [MB] (30 MBps) [2024-11-19T12:02:15.036Z] Copying: 566/1024 [MB] (23 MBps) [2024-11-19T12:02:15.975Z] Copying: 587/1024 [MB] (21 MBps) [2024-11-19T12:02:16.918Z] Copying: 607/1024 [MB] (20 MBps) [2024-11-19T12:02:17.952Z] Copying: 626/1024 [MB] (18 MBps) [2024-11-19T12:02:18.896Z] Copying: 640/1024 [MB] (14 MBps) [2024-11-19T12:02:19.835Z] Copying: 660/1024 [MB] (19 MBps) [2024-11-19T12:02:20.772Z] Copying: 706/1024 [MB] (45 MBps) [2024-11-19T12:02:22.160Z] Copying: 758/1024 [MB] (52 MBps) [2024-11-19T12:02:23.104Z] Copying: 801/1024 [MB] (43 MBps) [2024-11-19T12:02:24.044Z] Copying: 845/1024 [MB] (44 MBps) [2024-11-19T12:02:24.978Z] Copying: 889/1024 [MB] (43 MBps) [2024-11-19T12:02:25.911Z] Copying: 941/1024 [MB] (52 MBps) [2024-11-19T12:02:26.477Z] Copying: 992/1024 [MB] (51 MBps) [2024-11-19T12:02:27.045Z] Copying: 1024/1024 [MB] (average 39 MBps)[2024-11-19 
12:02:27.029370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.633 [2024-11-19 12:02:27.029449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:13.633 [2024-11-19 12:02:27.029463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:13.633 [2024-11-19 12:02:27.029472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.633 [2024-11-19 12:02:27.029492] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:13.633 [2024-11-19 12:02:27.029943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.633 [2024-11-19 12:02:27.029967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:13.633 [2024-11-19 12:02:27.029976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:31:13.633 [2024-11-19 12:02:27.029984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.633 [2024-11-19 12:02:27.030199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.633 [2024-11-19 12:02:27.030220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:13.633 [2024-11-19 12:02:27.030229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:31:13.633 [2024-11-19 12:02:27.030238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.633 [2024-11-19 12:02:27.030268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.633 [2024-11-19 12:02:27.030278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:13.633 [2024-11-19 12:02:27.030287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:13.633 [2024-11-19 12:02:27.030301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.633 [2024-11-19 12:02:27.030356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:13.633 [2024-11-19 12:02:27.030366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:13.633 [2024-11-19 12:02:27.030375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:13.633 [2024-11-19 12:02:27.030385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.633 [2024-11-19 12:02:27.030399] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:13.633 [2024-11-19 12:02:27.030427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:31:13.633 [2024-11-19 12:02:27.030439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030491] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 
[2024-11-19 12:02:27.030688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:13.633 [2024-11-19 12:02:27.030703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:31:13.634 [2024-11-19 12:02:27.030870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.030994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.031001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.031008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.031016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.031023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.031030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.031037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:13.634 [2024-11-19 12:02:27.031044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:31:13.634 [2024-11-19 12:02:27.031213] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:31:13.634 [2024-11-19 12:02:27.031223] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90c3f044-fb37-4a86-8d3e-acf3f349e6a6
00:31:13.634 [2024-11-19 12:02:27.031231] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:31:13.634 [2024-11-19 12:02:27.031238] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1568
00:31:13.634 [2024-11-19 12:02:27.031248] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1536
00:31:13.634 [2024-11-19 12:02:27.031256] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0208
00:31:13.634 [2024-11-19 12:02:27.031263] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:31:13.634 [2024-11-19 12:02:27.031270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:31:13.634 [2024-11-19 12:02:27.031279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:31:13.634 [2024-11-19 12:02:27.031286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:31:13.634 [2024-11-19 12:02:27.031292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:31:13.634 [2024-11-19 12:02:27.031298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:13.634 [2024-11-19 12:02:27.031306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:31:13.634 [2024-11-19 12:02:27.031313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms
00:31:13.634 [2024-11-19 12:02:27.031320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:13.634 [2024-11-19 12:02:27.032779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:13.634 [2024-11-19 12:02:27.032812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:31:13.634 [2024-11-19 12:02:27.032821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms
00:31:13.634 [2024-11-19 12:02:27.032832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:13.634 [2024-11-19 12:02:27.032909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:13.634 [2024-11-19 12:02:27.032917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:31:13.634 [2024-11-19 12:02:27.032924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms
00:31:13.634 [2024-11-19 12:02:27.032931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:13.634 [2024-11-19 12:02:27.040007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:13.635 [2024-11-19 12:02:27.040033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:31:13.635 [2024-11-19 12:02:27.040049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:13.635 [2024-11-19 12:02:27.040057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:13.635 [2024-11-19 12:02:27.040115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:13.635 [2024-11-19 12:02:27.040123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:31:13.635 [2024-11-19 12:02:27.040130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:13.635 [2024-11-19 12:02:27.040140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:13.635 [2024-11-19 12:02:27.040188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:13.635 [2024-11-19 12:02:27.040198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:31:13.635 [2024-11-19 12:02:27.040206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:13.635 [2024-11-19 12:02:27.040215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:13.635 [2024-11-19 12:02:27.040230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:13.635 [2024-11-19 12:02:27.040237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:31:13.635 [2024-11-19 12:02:27.040245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:13.635 [2024-11-19 12:02:27.040252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:13.893 [2024-11-19 12:02:27.049548] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.893 [2024-11-19 12:02:27.049589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:13.893 [2024-11-19 12:02:27.049603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.893 [2024-11-19 12:02:27.049610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.893 [2024-11-19 12:02:27.057507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.893 [2024-11-19 12:02:27.057547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:13.893 [2024-11-19 12:02:27.057557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.893 [2024-11-19 12:02:27.057565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.893 [2024-11-19 12:02:27.057595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.893 [2024-11-19 12:02:27.057603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:13.893 [2024-11-19 12:02:27.057611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.893 [2024-11-19 12:02:27.057618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.893 [2024-11-19 12:02:27.057664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.893 [2024-11-19 12:02:27.057673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:13.893 [2024-11-19 12:02:27.057680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.893 [2024-11-19 12:02:27.057687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.893 [2024-11-19 12:02:27.057866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.893 [2024-11-19 12:02:27.057886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:13.893 [2024-11-19 12:02:27.057894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.893 [2024-11-19 12:02:27.057902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.894 [2024-11-19 12:02:27.057927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.894 [2024-11-19 12:02:27.057941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:13.894 [2024-11-19 12:02:27.057952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.894 [2024-11-19 12:02:27.057959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.894 [2024-11-19 12:02:27.057995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.894 [2024-11-19 12:02:27.058004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:13.894 [2024-11-19 12:02:27.058016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.894 [2024-11-19 12:02:27.058023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.894 [2024-11-19 12:02:27.058061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.894 [2024-11-19 12:02:27.058071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:13.894 [2024-11-19 12:02:27.058079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.894 [2024-11-19 12:02:27.058086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
00:31:13.894 [2024-11-19 12:02:27.058086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:13.894 [2024-11-19 12:02:27.058196] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 28.796 ms, result 0
00:31:13.894
00:31:13.894
00:31:13.894 12:02:27 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:31:16.426 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:31:16.426 12:02:29 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:31:16.426 12:02:29 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:31:16.426 12:02:29 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:31:16.426 12:02:29 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:31:16.426 12:02:29 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:31:16.426 12:02:29 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92680
00:31:16.426 12:02:29 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92680 ']'
00:31:16.426 12:02:29 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92680
00:31:16.426 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (92680) - No such process
00:31:16.426 Process with pid 92680 is not found
00:31:16.426 12:02:29 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 92680 is not found'
00:31:16.426 12:02:29 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:31:16.426 Remove shared memory files
12:02:29 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:16.427 12:02:29 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:31:16.427 12:02:29 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_90c3f044-fb37-4a86-8d3e-acf3f349e6a6_band_md /dev/hugepages/ftl_90c3f044-fb37-4a86-8d3e-acf3f349e6a6_l2p_l1 /dev/hugepages/ftl_90c3f044-fb37-4a86-8d3e-acf3f349e6a6_l2p_l2 /dev/hugepages/ftl_90c3f044-fb37-4a86-8d3e-acf3f349e6a6_l2p_l2_ctx /dev/hugepages/ftl_90c3f044-fb37-4a86-8d3e-acf3f349e6a6_nvc_md /dev/hugepages/ftl_90c3f044-fb37-4a86-8d3e-acf3f349e6a6_p2l_pool /dev/hugepages/ftl_90c3f044-fb37-4a86-8d3e-acf3f349e6a6_sb /dev/hugepages/ftl_90c3f044-fb37-4a86-8d3e-acf3f349e6a6_sb_shm /dev/hugepages/ftl_90c3f044-fb37-4a86-8d3e-acf3f349e6a6_trim_bitmap /dev/hugepages/ftl_90c3f044-fb37-4a86-8d3e-acf3f349e6a6_trim_log /dev/hugepages/ftl_90c3f044-fb37-4a86-8d3e-acf3f349e6a6_trim_md /dev/hugepages/ftl_90c3f044-fb37-4a86-8d3e-acf3f349e6a6_vmap
00:31:16.427 12:02:29 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:31:16.427 12:02:29 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:16.427 12:02:29 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:31:16.427
00:31:16.427 real 3m0.266s
00:31:16.427 user 2m48.327s
00:31:16.427 sys 0m11.968s
00:31:16.427 12:02:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:16.427 12:02:29 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:31:16.427 ************************************
00:31:16.427 END TEST ftl_restore_fast
00:31:16.427 ************************************
00:31:16.427 12:02:29 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:31:16.427 12:02:29 ftl -- ftl/ftl.sh@14 -- # killprocess 83747
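(Editor's note: the "testfile: OK" line above is the pass condition for ftl_restore_fast: a checksum recorded before the fast shutdown must still verify after the device is restored. The shape of that check, as a minimal sketch, with testfile and testfile.md5 standing in for the full paths restore.sh uses:)

    # Before shutdown: record a checksum of the written test data.
    md5sum testfile > testfile.md5
    # ... fast-shutdown and restore the FTL bdev in between ...
    # After restore: -c re-reads the file and fails on any mismatch.
    md5sum -c testfile.md5
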
00:31:16.427 12:02:29 ftl -- common/autotest_common.sh@950 -- # '[' -z 83747 ']'
00:31:16.427 12:02:29 ftl -- common/autotest_common.sh@954 -- # kill -0 83747
00:31:16.427 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (83747) - No such process
00:31:16.427 Process with pid 83747 is not found
00:31:16.427 12:02:29 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 83747 is not found'
00:31:16.427 12:02:29 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:31:16.427 12:02:29 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=94531
00:31:16.427 12:02:29 ftl -- ftl/ftl.sh@20 -- # waitforlisten 94531
00:31:16.427 12:02:29 ftl -- common/autotest_common.sh@831 -- # '[' -z 94531 ']'
00:31:16.427 12:02:29 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:16.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:02:29 ftl -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:16.427 12:02:29 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:16.427 12:02:29 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:31:16.427 12:02:29 ftl -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:16.427 12:02:29 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:16.427 [2024-11-19 12:02:29.619800] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:31:16.427 [2024-11-19 12:02:29.619927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94531 ]
00:31:16.427 [2024-11-19 12:02:29.755536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:16.427 [2024-11-19 12:02:29.786751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:31:17.368 12:02:30 ftl -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:17.368 12:02:30 ftl -- common/autotest_common.sh@864 -- # return 0
00:31:17.368 12:02:30 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:31:17.368 nvme0n1
00:31:17.368 12:02:30 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:31:17.368 12:02:30 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:31:17.368 12:02:30 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:31:17.629 12:02:30 ftl -- ftl/common.sh@28 -- # stores=e8231b0d-a0ef-4485-b02c-0a678fe3e601
00:31:17.629 12:02:30 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:31:17.629 12:02:30 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e8231b0d-a0ef-4485-b02c-0a678fe3e601
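(Editor's note: the clear_lvols step traced above enumerates every lvolstore over the RPC socket and deletes it. Condensed into a standalone sketch, with the rpc.py path as it appears in the log:)

    # Delete every logical volume store known to the running spdk_tgt.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    stores=$($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        $rpc bdev_lvol_delete_lvstore -u "$lvs"
    done
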
00:31:17.890 12:02:31 ftl -- ftl/ftl.sh@23 -- # killprocess 94531
00:31:17.890 12:02:31 ftl -- common/autotest_common.sh@950 -- # '[' -z 94531 ']'
00:31:17.890 12:02:31 ftl -- common/autotest_common.sh@954 -- # kill -0 94531
00:31:17.890 12:02:31 ftl -- common/autotest_common.sh@955 -- # uname
00:31:17.891 12:02:31 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:17.891 12:02:31 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94531
00:31:17.891 12:02:31 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:31:17.891 killing process with pid 94531
12:02:31 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:31:17.891 12:02:31 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94531'
00:31:17.891 12:02:31 ftl -- common/autotest_common.sh@969 -- # kill 94531
00:31:17.891 12:02:31 ftl -- common/autotest_common.sh@974 -- # wait 94531
00:31:18.463 12:02:31 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:31:18.463 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:18.463 Waiting for block devices as requested
00:31:18.796 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:31:18.796 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:31:18.796 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:31:18.796 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:31:24.108 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:31:24.108 12:02:37 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:31:24.108 Remove shared memory files
12:02:37 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:24.109 12:02:37 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:31:24.109 12:02:37 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:31:24.109 12:02:37 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:31:24.109 12:02:37 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:24.109 12:02:37 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:31:24.109 ************************************
00:31:24.109 END TEST ftl
00:31:24.109 ************************************
00:31:24.109
00:31:24.109
00:31:24.109 real 16m6.795s
00:31:24.109 user 18m0.824s
00:31:24.109 sys 1m16.824s
00:31:24.109 12:02:37 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:24.109 12:02:37 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:24.109 12:02:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:31:24.109 12:02:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:31:24.109 12:02:37 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:31:24.109 12:02:37 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:31:24.109 12:02:37 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:31:24.109 12:02:37 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:31:24.109 12:02:37 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:31:24.109 12:02:37 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:31:24.109 12:02:37 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:31:24.109 12:02:37 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:31:24.109 12:02:37 -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:24.109 12:02:37 -- common/autotest_common.sh@10 -- # set +x
00:31:24.109 12:02:37 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:31:24.109 12:02:37 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:31:24.109 12:02:37 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:31:24.109 12:02:37 -- common/autotest_common.sh@10 -- # set +x
00:31:25.051 INFO: APP EXITING
00:31:25.051 INFO: killing all VMs
00:31:25.051 INFO: killing vhost app
00:31:25.051 INFO: EXIT DONE
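(Editor's note: killprocess, seen twice above for pids 92680 and 94531, probes the target with kill -0 before terminating it, so an already-gone process is reported rather than treated as an error. A condensed sketch of that pattern; the real helper lives in test/common/autotest_common.sh and additionally special-cases sudo-wrapped processes:)

    killprocess() {
        local pid=$1
        # kill -0 sends no signal; it only tests whether the pid exists.
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }
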
00:31:25.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:25.574 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:31:25.574 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:31:25.574 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:31:25.574 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:31:25.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:26.094 Cleaning
00:31:26.094 Removing: /var/run/dpdk/spdk0/config
00:31:26.094 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:31:26.094 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:31:26.094 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:31:26.094 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:31:26.094 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:31:26.094 Removing: /var/run/dpdk/spdk0/hugepage_info
00:31:26.094 Removing: /var/run/dpdk/spdk0
00:31:26.094 Removing: /var/run/dpdk/spdk_pid69243
00:31:26.094 Removing: /var/run/dpdk/spdk_pid69401
00:31:26.094 Removing: /var/run/dpdk/spdk_pid69597
00:31:26.094 Removing: /var/run/dpdk/spdk_pid69679
00:31:26.355 Removing: /var/run/dpdk/spdk_pid69708
00:31:26.355 Removing: /var/run/dpdk/spdk_pid69819
00:31:26.355 Removing: /var/run/dpdk/spdk_pid69837
00:31:26.355 Removing: /var/run/dpdk/spdk_pid70014
00:31:26.355 Removing: /var/run/dpdk/spdk_pid70088
00:31:26.355 Removing: /var/run/dpdk/spdk_pid70167
00:31:26.355 Removing: /var/run/dpdk/spdk_pid70267
00:31:26.355 Removing: /var/run/dpdk/spdk_pid70348
00:31:26.355 Removing: /var/run/dpdk/spdk_pid70382
00:31:26.355 Removing: /var/run/dpdk/spdk_pid70418
00:31:26.355 Removing: /var/run/dpdk/spdk_pid70489
00:31:26.355 Removing: /var/run/dpdk/spdk_pid70589
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71009
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71061
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71103
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71119
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71177
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71193
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71251
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71267
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71309
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71327
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71369
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71387
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71514
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71545
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71634
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71795
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71857
00:31:26.355 Removing: /var/run/dpdk/spdk_pid71888
00:31:26.355 Removing: /var/run/dpdk/spdk_pid72303
00:31:26.355 Removing: /var/run/dpdk/spdk_pid72390
00:31:26.355 Removing: /var/run/dpdk/spdk_pid72493
00:31:26.355 Removing: /var/run/dpdk/spdk_pid72530
00:31:26.355 Removing: /var/run/dpdk/spdk_pid72555
00:31:26.355 Removing: /var/run/dpdk/spdk_pid72633
00:31:26.355 Removing: /var/run/dpdk/spdk_pid73239
00:31:26.355 Removing: /var/run/dpdk/spdk_pid73270
00:31:26.355 Removing: /var/run/dpdk/spdk_pid73739
00:31:26.355 Removing: /var/run/dpdk/spdk_pid73826
00:31:26.355 Removing: /var/run/dpdk/spdk_pid73936
00:31:26.355 Removing: /var/run/dpdk/spdk_pid73967
00:31:26.355 Removing: /var/run/dpdk/spdk_pid73998
00:31:26.355 Removing: /var/run/dpdk/spdk_pid74018
00:31:26.355 Removing: /var/run/dpdk/spdk_pid75835
00:31:26.355 Removing: /var/run/dpdk/spdk_pid75954
00:31:26.355 Removing: /var/run/dpdk/spdk_pid75964
00:31:26.355 Removing: /var/run/dpdk/spdk_pid75976
00:31:26.355 Removing: /var/run/dpdk/spdk_pid76015
00:31:26.355 Removing: /var/run/dpdk/spdk_pid76019
00:31:26.355 Removing: /var/run/dpdk/spdk_pid76031
00:31:26.355 Removing: /var/run/dpdk/spdk_pid76071
00:31:26.355 Removing: /var/run/dpdk/spdk_pid76075
00:31:26.355 Removing: /var/run/dpdk/spdk_pid76087
00:31:26.355 Removing: /var/run/dpdk/spdk_pid76126
00:31:26.355 Removing: /var/run/dpdk/spdk_pid76130
00:31:26.355 Removing: /var/run/dpdk/spdk_pid76142
00:31:26.355 Removing: /var/run/dpdk/spdk_pid77502
00:31:26.355 Removing: /var/run/dpdk/spdk_pid77588
00:31:26.355 Removing: /var/run/dpdk/spdk_pid78982
00:31:26.355 Removing: /var/run/dpdk/spdk_pid80359
00:31:26.355 Removing: /var/run/dpdk/spdk_pid80420
00:31:26.355 Removing: /var/run/dpdk/spdk_pid80476
00:31:26.355 Removing: /var/run/dpdk/spdk_pid80530
00:31:26.355 Removing: /var/run/dpdk/spdk_pid80607
00:31:26.355 Removing: /var/run/dpdk/spdk_pid80672
00:31:26.355 Removing: /var/run/dpdk/spdk_pid80811
00:31:26.355 Removing: /var/run/dpdk/spdk_pid81159
00:31:26.355 Removing: /var/run/dpdk/spdk_pid81184
00:31:26.355 Removing: /var/run/dpdk/spdk_pid81628
00:31:26.355 Removing: /var/run/dpdk/spdk_pid81804
00:31:26.355 Removing: /var/run/dpdk/spdk_pid81894
00:31:26.355 Removing: /var/run/dpdk/spdk_pid81998
00:31:26.355 Removing: /var/run/dpdk/spdk_pid82041
00:31:26.355 Removing: /var/run/dpdk/spdk_pid82061
00:31:26.355 Removing: /var/run/dpdk/spdk_pid82348
00:31:26.355 Removing: /var/run/dpdk/spdk_pid82386
00:31:26.355 Removing: /var/run/dpdk/spdk_pid82442
00:31:26.355 Removing: /var/run/dpdk/spdk_pid82804
00:31:26.355 Removing: /var/run/dpdk/spdk_pid82948
00:31:26.355 Removing: /var/run/dpdk/spdk_pid83747
00:31:26.355 Removing: /var/run/dpdk/spdk_pid83868
00:31:26.355 Removing: /var/run/dpdk/spdk_pid84023
00:31:26.355 Removing: /var/run/dpdk/spdk_pid84098
00:31:26.355 Removing: /var/run/dpdk/spdk_pid84445
00:31:26.355 Removing: /var/run/dpdk/spdk_pid84710
00:31:26.355 Removing: /var/run/dpdk/spdk_pid85056
00:31:26.355 Removing: /var/run/dpdk/spdk_pid85227
00:31:26.355 Removing: /var/run/dpdk/spdk_pid85390
00:31:26.355 Removing: /var/run/dpdk/spdk_pid85432
00:31:26.355 Removing: /var/run/dpdk/spdk_pid85675
00:31:26.355 Removing: /var/run/dpdk/spdk_pid85702
00:31:26.355 Removing: /var/run/dpdk/spdk_pid85741
00:31:26.355 Removing: /var/run/dpdk/spdk_pid86010
00:31:26.355 Removing: /var/run/dpdk/spdk_pid86235
00:31:26.355 Removing: /var/run/dpdk/spdk_pid86900
00:31:26.355 Removing: /var/run/dpdk/spdk_pid87734
00:31:26.355 Removing: /var/run/dpdk/spdk_pid88444
00:31:26.355 Removing: /var/run/dpdk/spdk_pid89368
00:31:26.355 Removing: /var/run/dpdk/spdk_pid89510
00:31:26.355 Removing: /var/run/dpdk/spdk_pid89599
00:31:26.355 Removing: /var/run/dpdk/spdk_pid89957
00:31:26.355 Removing: /var/run/dpdk/spdk_pid90015
00:31:26.355 Removing: /var/run/dpdk/spdk_pid90435
00:31:26.355 Removing: /var/run/dpdk/spdk_pid90916
00:31:26.355 Removing: /var/run/dpdk/spdk_pid91820
00:31:26.355 Removing: /var/run/dpdk/spdk_pid91937
00:31:26.355 Removing: /var/run/dpdk/spdk_pid91967
00:31:26.355 Removing: /var/run/dpdk/spdk_pid92020
00:31:26.355 Removing: /var/run/dpdk/spdk_pid92065
00:31:26.355 Removing: /var/run/dpdk/spdk_pid92123
00:31:26.355 Removing: /var/run/dpdk/spdk_pid92295
00:31:26.355 Removing: /var/run/dpdk/spdk_pid92345
00:31:26.355 Removing: /var/run/dpdk/spdk_pid92401
00:31:26.355 Removing: /var/run/dpdk/spdk_pid92452
00:31:26.355 Removing: /var/run/dpdk/spdk_pid92475
00:31:26.355 Removing: /var/run/dpdk/spdk_pid92537
00:31:26.355 Removing: /var/run/dpdk/spdk_pid92680
00:31:26.355 Removing: /var/run/dpdk/spdk_pid92876
00:31:26.355 Removing: /var/run/dpdk/spdk_pid93251
00:31:26.355 Removing: /var/run/dpdk/spdk_pid93545
00:31:26.355 Removing: /var/run/dpdk/spdk_pid94204
00:31:26.355 Removing: /var/run/dpdk/spdk_pid94531
00:31:26.355 Clean
00:31:26.616 12:02:39 -- common/autotest_common.sh@1451 -- # return 0
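(Editor's note: the Cleaning stage above removes per-process DPDK runtime state left under /var/run/dpdk, one spdk_pid<NNN> entry per SPDK process started during the run. An equivalent manual sweep, illustrative only and using the paths as printed above:)

    # Remove stale DPDK runtime state left by earlier SPDK processes.
    rm -rf /var/run/dpdk/spdk0 /var/run/dpdk/spdk_pid*
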
00:31:26.616 12:02:39 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:31:26.616 12:02:39 -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:26.616 12:02:39 -- common/autotest_common.sh@10 -- # set +x
00:31:26.616 12:02:39 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:31:26.616 12:02:39 -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:26.616 12:02:39 -- common/autotest_common.sh@10 -- # set +x
00:31:26.616 12:02:39 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:26.616 12:02:39 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:31:26.616 12:02:39 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:31:26.616 12:02:39 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:31:26.616 12:02:39 -- spdk/autotest.sh@394 -- # hostname
00:31:26.616 12:02:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:31:53.182 geninfo: WARNING: invalid characters removed from testname!
00:31:53.182 12:03:03 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:55.748 12:03:06 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
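(Editor's note: the lcov invocations above and just below implement the coverage post-processing: capture counters from the build tree, merge them with the pre-test baseline, then prune trees that should not count toward SPDK coverage; the '/usr/*', vmd, spdk_lspci and spdk_top removals that follow repeat the same -r pattern. Stripped of the repeated --rc options, and with $repo standing in for the checkout path, the sequence is:)

    # Capture test-time coverage for the SPDK tree only.
    lcov -q -c --no-external -d "$repo" -t "$(hostname)" -o cov_test.info
    # Merge with the baseline captured before the tests ran.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # Drop submodules and external trees from the combined report.
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
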
00:31:57.223 12:03:08 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:59.760 12:03:10 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:01.660 12:03:12 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:04.188 12:03:14 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:04.188 12:03:17 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:04.188 12:03:17 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:32:04.188 12:03:17 -- common/autotest_common.sh@1681 -- $ lcov --version
00:32:04.188 12:03:17 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:32:04.188 12:03:17 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:32:04.188 12:03:17 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:32:04.188 12:03:17 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:32:04.188 12:03:17 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:32:04.188 12:03:17 -- scripts/common.sh@336 -- $ IFS=.-:
00:32:04.188 12:03:17 -- scripts/common.sh@336 -- $ read -ra ver1
00:32:04.188 12:03:17 -- scripts/common.sh@337 -- $ IFS=.-:
00:32:04.188 12:03:17 -- scripts/common.sh@337 -- $ read -ra ver2
00:32:04.188 12:03:17 -- scripts/common.sh@338 -- $ local 'op=<'
00:32:04.188 12:03:17 -- scripts/common.sh@340 -- $ ver1_l=2
00:32:04.188 12:03:17 -- scripts/common.sh@341 -- $ ver2_l=1
00:32:04.188 12:03:17 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:32:04.188 12:03:17 -- scripts/common.sh@344 -- $ case "$op" in
00:32:04.188 12:03:17 -- scripts/common.sh@345 -- $ : 1
00:32:04.188 12:03:17 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:32:04.188 12:03:17 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:04.188 12:03:17 -- scripts/common.sh@365 -- $ decimal 1
00:32:04.188 12:03:17 -- scripts/common.sh@353 -- $ local d=1
00:32:04.188 12:03:17 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:32:04.188 12:03:17 -- scripts/common.sh@355 -- $ echo 1
00:32:04.188 12:03:17 -- scripts/common.sh@365 -- $ ver1[v]=1
00:32:04.188 12:03:17 -- scripts/common.sh@366 -- $ decimal 2
00:32:04.188 12:03:17 -- scripts/common.sh@353 -- $ local d=2
00:32:04.188 12:03:17 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:32:04.188 12:03:17 -- scripts/common.sh@355 -- $ echo 2
00:32:04.188 12:03:17 -- scripts/common.sh@366 -- $ ver2[v]=2
00:32:04.188 12:03:17 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:32:04.188 12:03:17 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:32:04.188 12:03:17 -- scripts/common.sh@368 -- $ return 0
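(Editor's note: the scripts/common.sh trace above is a dotted-version comparison: both strings are split on '.', '-' and ':' into arrays, then compared field by field; here 1.15 < 2 succeeds on the first field. A self-contained sketch of the same idea, not the literal cmp_versions implementation:)

    # Return success if version $1 is strictly older than version $2.
    version_lt() {
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing fields compare as 0, so 1.15 vs 2 works as 1.15.0 vs 2.0.0.
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
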
00:32:04.188 12:03:17 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:04.188 12:03:17 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:32:04.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:04.188 --rc genhtml_branch_coverage=1
00:32:04.189 --rc genhtml_function_coverage=1
00:32:04.189 --rc genhtml_legend=1
00:32:04.189 --rc geninfo_all_blocks=1
00:32:04.189 --rc geninfo_unexecuted_blocks=1
00:32:04.189
00:32:04.189 '
00:32:04.189 12:03:17 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:32:04.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:04.189 --rc genhtml_branch_coverage=1
00:32:04.189 --rc genhtml_function_coverage=1
00:32:04.189 --rc genhtml_legend=1
00:32:04.189 --rc geninfo_all_blocks=1
00:32:04.189 --rc geninfo_unexecuted_blocks=1
00:32:04.189
00:32:04.189 '
00:32:04.189 12:03:17 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:32:04.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:04.189 --rc genhtml_branch_coverage=1
00:32:04.189 --rc genhtml_function_coverage=1
00:32:04.189 --rc genhtml_legend=1
00:32:04.189 --rc geninfo_all_blocks=1
00:32:04.189 --rc geninfo_unexecuted_blocks=1
00:32:04.189
00:32:04.189 '
00:32:04.189 12:03:17 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:32:04.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:04.189 --rc genhtml_branch_coverage=1
00:32:04.189 --rc genhtml_function_coverage=1
00:32:04.189 --rc genhtml_legend=1
00:32:04.189 --rc geninfo_all_blocks=1
00:32:04.189 --rc geninfo_unexecuted_blocks=1
00:32:04.189
00:32:04.189 '
00:32:04.189 12:03:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:32:04.189 12:03:17 -- scripts/common.sh@15 -- $ shopt -s extglob
00:32:04.189 12:03:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:32:04.189 12:03:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:04.189 12:03:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:04.189 12:03:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:04.189 12:03:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:04.189 12:03:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:04.189 12:03:17 -- paths/export.sh@5 -- $ export PATH
00:32:04.189 12:03:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:04.189 12:03:17 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:32:04.189 12:03:17 -- common/autobuild_common.sh@479 -- $ date +%s
00:32:04.189 12:03:17 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732017797.XXXXXX
00:32:04.189 12:03:17 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732017797.AQ8Bz0
00:32:04.189 12:03:17 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:32:04.189 12:03:17 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
00:32:04.189 12:03:17 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:32:04.189 12:03:17 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
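(Editor's note: autopackage stages its work in a uniquely named scratch directory derived from the epoch timestamp, and the same timestamp reappears as the suffix on the monitor logs below. The pattern in isolation, as a sketch:)

    # One epoch timestamp names both the scratch dir and related artifacts.
    stamp=$(date +%s)
    SPDK_WORKSPACE=$(mktemp -dt "spdk_${stamp}.XXXXXX")
    echo "$SPDK_WORKSPACE"   # e.g. /tmp/spdk_1732017797.AQ8Bz0
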
00:32:04.189 12:03:17 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:32:04.189 12:03:17 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:32:04.189 12:03:17 -- common/autobuild_common.sh@495 -- $ get_config_params
00:32:04.189 12:03:17 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:32:04.189 12:03:17 -- common/autotest_common.sh@10 -- $ set +x
00:32:04.189 12:03:17 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:32:04.189 12:03:17 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:32:04.189 12:03:17 -- pm/common@17 -- $ local monitor
00:32:04.189 12:03:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:04.189 12:03:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:04.189 12:03:17 -- pm/common@25 -- $ sleep 1
00:32:04.189 12:03:17 -- pm/common@21 -- $ date +%s
00:32:04.189 12:03:17 -- pm/common@21 -- $ date +%s
00:32:04.189 12:03:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732017797
00:32:04.189 12:03:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732017797
00:32:04.189 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732017797_collect-cpu-load.pm.log
00:32:04.189 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732017797_collect-vmstat.pm.log
00:32:05.123 12:03:18 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:32:05.123 12:03:18 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:32:05.123 12:03:18 -- spdk/autopackage.sh@14 -- $ timing_finish
00:32:05.123 12:03:18 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:05.123 12:03:18 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:05.123 12:03:18 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:05.123 12:03:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:32:05.123 12:03:18 -- pm/common@29 -- $ signal_monitor_resources TERM
00:32:05.123 12:03:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:32:05.123 12:03:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:05.123 12:03:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:32:05.123 12:03:18 -- pm/common@44 -- $ pid=96236
00:32:05.123 12:03:18 -- pm/common@50 -- $ kill -TERM 96236
00:32:05.123 12:03:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:05.123 12:03:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:32:05.123 12:03:18 -- pm/common@44 -- $ pid=96238
00:32:05.123 12:03:18 -- pm/common@50 -- $ kill -TERM 96238
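(Editor's note: start_monitor_resources launches collect-cpu-load and collect-vmstat in the background, and each drops a pidfile under the power/ output directory; stop_monitor_resources, traced at the end above, walks those pidfiles and signals TERM. A condensed sketch of the teardown half, with power_dir standing in for the output path:)

    # Stop every resource monitor that left a pidfile behind.
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    for pidfile in "$power_dir"/collect-*.pid; do
        [[ -e $pidfile ]] || continue
        kill -TERM "$(cat "$pidfile")"
    done
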
00:32:05.123 + [[ -n 5752 ]]
00:32:05.123 + sudo kill 5752
00:32:05.132 [Pipeline] }
00:32:05.148 [Pipeline] // timeout
00:32:05.153 [Pipeline] }
00:32:05.177 [Pipeline] // stage
00:32:05.183 [Pipeline] }
00:32:05.198 [Pipeline] // catchError
00:32:05.207 [Pipeline] stage
00:32:05.210 [Pipeline] { (Stop VM)
00:32:05.222 [Pipeline] sh
00:32:05.500 + vagrant halt
00:32:08.029 ==> default: Halting domain...
00:32:13.306 [Pipeline] sh
00:32:13.584 + vagrant destroy -f
00:32:16.115 ==> default: Removing domain...
00:32:16.384 [Pipeline] sh
00:32:16.661 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:32:16.668 [Pipeline] }
00:32:16.682 [Pipeline] // stage
00:32:16.687 [Pipeline] }
00:32:16.701 [Pipeline] // dir
00:32:16.706 [Pipeline] }
00:32:16.719 [Pipeline] // wrap
00:32:16.725 [Pipeline] }
00:32:16.738 [Pipeline] // catchError
00:32:16.746 [Pipeline] stage
00:32:16.748 [Pipeline] { (Epilogue)
00:32:16.760 [Pipeline] sh
00:32:17.033 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:22.308 [Pipeline] catchError
00:32:22.310 [Pipeline] {
00:32:22.323 [Pipeline] sh
00:32:22.603 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:22.860 Artifacts sizes are good
00:32:22.869 [Pipeline] }
00:32:22.884 [Pipeline] // catchError
00:32:22.895 [Pipeline] archiveArtifacts
00:32:22.903 Archiving artifacts
00:32:23.046 [Pipeline] cleanWs
00:32:23.058 [WS-CLEANUP] Deleting project workspace...
00:32:23.058 [WS-CLEANUP] Deferred wipeout is used...
00:32:23.063 [WS-CLEANUP] done
00:32:23.065 [Pipeline] }
00:32:23.082 [Pipeline] // stage
00:32:23.087 [Pipeline] }
00:32:23.101 [Pipeline] // node
00:32:23.107 [Pipeline] End of Pipeline
00:32:23.154 Finished: SUCCESS